AI researchers at the tech giant looked for common patterns in the scans and matched them against the data in the patients' medical records; one algorithm could work out whether someone was a smoker with 71 per cent accuracy.
The models can also predict other factors, including age, gender and the likelihood of a heart attack or stroke, according to the paper in the journal Nature Biomedical Engineering.
Lily Peng, a product manager at Google Brain, said: "Given the retinal image of one patient who (up to five years) later experienced a major [cardiovascular] event (such as a heart attack) and the image of another patient who did not, our algorithm could pick out the patient who had the cardiovascular event 70 per cent of the time."
The dataset was collected by EyePACS, a programme developed by doctors. Scientists from Stanford University, Google Brain and Verily used more than 1.6 million retinal scans taken from 284,335 patients to train their models, while a further 25,996 images were held back to validate the algorithms.
Lily continued: "Traditionally, medical discoveries are often made through a sophisticated form of guess and test: making hypotheses from observations and then designing and running experiments to test the hypotheses.
"However, with medical images, observing and quantifying associations can be difficult because of the wide variety of features, patterns, colours, values and shapes that are present in real images.
"Our approach uses deep learning to draw connections between changes in the human anatomy and disease, akin to how doctors learn to associate signs and symptoms with the diagnosis of a new disease.
"This could help scientists generate more targeted hypotheses and drive a wide range of future research."