How I used AI to Detect Breast Cancer in Less than 100 Lines of Code
Breast cancer is a huge issue: 1 in 8 women will get breast cancer during their lives, and 1 in 36 women will die from it. This is unbelievable!
One of the best ways to increase your probability of surviving breast cancer is to get an early diagnosis. If it’s detected while it’s still small and hasn’t spread, it’s significantly easier to treat. This is why it’s important to get regularly checked by a doctor and make sure you don’t feel anything abnormal.
Currently, in order to check if you have breast cancer, you usually get a mammogram which is an x-ray of your breast tissue. If the doctors see anything suspicious, they will perform a biopsy and take a sample of tissue. They then examine that tissue sample to determine if it is malignant or not.
However, doctors aren’t perfect. They can make mistakes, especially if they just “happen” to be tired when looking at your test and don’t pay enough attention. In addition, a doctor’s time is extremely valuable and examining these tests can occupy a sizeable amount of it. This is a drain on the healthcare system.
Technology has come a long way since we first started performing breast cancer screenings; there must be a better way.
Well… it turns out there is! And it uses AI.
Using TensorFlow, a machine learning software library, Google Colab, and the Wisconsin Diagnostic Breast Cancer dataset, I was able to construct a breast cancer classifier using a Deep Neural Network. The program I made is able to take biopsy data and report back whether or not the sample is malignant with >98% accuracy.
What are Neural Networks?
The base of my classifier is a Deep Neural Network. Basically, an artificial neural network performs an analysis on input data by mimicking the neural networks found in our brains. It is simply a network of layered nodes joined by weighted connections, with each node also having a bias. These weights and biases determine whether the output of one node will be able to trigger/activate another node or not. When the input nodes in the input layer (the first layer) are set to given values, they send a chain reaction through the network that eventually triggers the corresponding output nodes.
A deep neural network is simply an artificial neural network with more layers. The theory behind why this should be more effective is that by adding more layers to the network, it’s able to learn more complicated patterns in the data and perform better analyses.
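To make this concrete, here's a tiny toy sketch (not part of the actual classifier) of what a single node does: it takes a weighted sum of its inputs, adds a bias, and passes the result through an activation function. The numbers here are made up purely for illustration.

```python
import numpy as np

# Toy example of a single node: a weighted sum of its inputs plus a bias,
# passed through an activation function (here, a sigmoid).
inputs = np.array([0.5, 0.2, 0.9])    # outputs of the nodes in the previous layer
weights = np.array([0.8, -0.4, 0.3])  # strength of each connection
bias = 0.1

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

activation = sigmoid(np.dot(weights, inputs) + bias)
print(activation)  # how strongly this node "fires"
```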
The Program
Input Data
Any neural network requires input data; this is what's used to make predictions, train the network, and test its accuracy.
For the input of this network, I decided to use the Wisconsin Diagnostic Breast Cancer dataset. This is a collection of measurements taken from biopsies, together with the diagnosis that was made for each sample.
In total, the dataset contains 569 examples, 357 benign and 212 malignant. In addition, each example contains 30 measurements calculated from the biopsy, representing 10 different features of the cell nuclei.
The 10 features are:
- Radius (mean of distances from the centre to points on the perimeter)
- Texture (standard deviation of grey-scale values)
- Perimeter
- Area
- Smoothness (local variation in radius lengths)
- Compactness (perimeter² / area - 1.0)
- Concavity (severity of concave portions of the contour)
- Concave points (number of concave portions of the contour)
- Symmetry
- Fractal dimension ("coastline approximation" - 1)
The reason there are 30 input values, rather than just 10 (1 for each feature), is that three values (the mean, the standard error, and the "worst", or largest, value) were calculated for each feature, resulting in more data for a better diagnosis.
In code, the preparation of the data looks like the following. Because the dataset was already divided into train and test sets, all we have to do is upload them.
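Roughly, that looks like this (the exact code is in the linked Colab; the file names and column layout below are my assumptions, so adjust them to match your own files):

```python
import numpy as np
import pandas as pd

# Load the pre-split training and test sets.
# The file names and column order here are assumptions; adjust them to match your files.
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# The first 30 columns are the biopsy measurements;
# the last column is the diagnosis (1 = malignant, 0 = benign).
X_train = train.iloc[:, :30].values.astype(np.float32)
Y_train = train.iloc[:, 30].values.astype(np.float32)
X_test = test.iloc[:, :30].values.astype(np.float32)
Y_test = test.iloc[:, 30].values.astype(np.float32)
```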
The reason we divide the dataset into train and test sets, rather than using the entire dataset at once, is that we need some of the data for training and the rest for testing the network. If we tested the network on the same data we trained it on, it would probably get near 100% accuracy, and that would tell us nothing about its ability to detect cancer in biopsies it has never seen before. It'd be like giving a student the answers to a test and calling it a fair evaluation!
Network Architecture
Once we have the dataset separated and data files set up, we can start thinking about the network’s design.
When it comes to building a neural network, there's no one-size-fits-all design. There are countless ways you could arrange the nodes of a neural network, and each one will perform slightly differently.
This means a lot of the work with neural networks is trial and error: experimenting with different architectures and trying to design one that works best for your specific problem.
That said, over time and with experience you start to get a feel for which kinds of architectures work well for which problems. And even though every network is different, as long as the design is reasonably conventional it should perform fairly well, especially for simpler networks like the one I used.
The following code is how I created the network using TensorFlow and Keras.
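Here is a sketch of an equivalent Keras model (the exact code is in the Colab notebook linked at the end). The layer sizes follow the description below; the activation functions, loss, and optimiser are my assumptions, since ReLU hidden layers, a sigmoid output, binary cross-entropy, and Adam are the conventional choices for a binary classifier like this one.

```python
from tensorflow import keras

# Build the network layer by layer.
model = keras.Sequential([
    keras.Input(shape=(30,)),                           # input layer: 30 measurements per biopsy
    keras.layers.Dense(units=16, activation="relu"),    # second layer: 16 nodes
    keras.layers.Dense(units=8, activation="relu"),     # third layer: 8 nodes
    keras.layers.Dense(units=8, activation="relu"),     # fourth layer: 8 nodes
    keras.layers.Dense(units=1, activation="sigmoid"),  # output layer: probability the sample is malignant
])

# Loss and optimiser choices are assumptions, not necessarily the original settings.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```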
This network has 5 layers: the first with 30 input nodes, the second with 16 nodes, the third and fourth with 8 nodes each, and the final output layer with just one node for the output value.
Creating a new layer in the neural network with Keras and TensorFlow is actually super easy: all you have to do is duplicate one of the hidden-layer lines in the code above and change the units (number of nodes) value.
I recommend downloading my Colab project (link at the end of the article) and playing around with the architecture yourself. It can actually be really fun seeing how different network designs affect your result and it’s also a really good way to get a feel for how neural networks work.
Training the Network
Now that we have created the neural network we have to train it. Training a neural network means teaching it to recognise patterns in a dataset and output the correct predictions.
We do this by adjusting the weights and biases through gradient descent and backpropagation.
As I mentioned earlier, the weights and biases are the connections between the nodes in the network; they determine the chain reaction caused by a given input and what the corresponding output value will be.
When training, we want to adjust these connections so that they produce the most accurate output prediction for each training example.
This is done by taking an input example, running it through the network and then calculating how inaccurate the actual output was compared to the desired output. With this information, we can then tweak all the weights and biases by a small amount to decrease this difference.
The process of working out how much each weight and bias contributed to the prediction's error is known as backpropagation, and the process of adjusting the connections to reduce that error is known as gradient descent.
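As a toy illustration (not the code used in the classifier), here is a single update for a one-weight "network" with a squared-error loss, showing the error, the gradient, and the weight nudge:

```python
# Toy illustration: one input, one weight, squared-error loss.
x, target = 2.0, 1.0    # a training example and its desired output
w = 0.3                 # the current weight
learning_rate = 0.05

prediction = w * x                  # run the example through the "network"
error = prediction - target        # how far off the prediction was
gradient = 2 * error * x           # backpropagation: how the loss changes as w changes
w = w - learning_rate * gradient   # gradient descent: nudge w to reduce the error
print(w)                           # the updated weight, a little closer to the ideal value
```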
We can then repeat this process with all the examples in the training set, and after a few iterations, the model becomes considerably more accurate.
Doing this and training the model with TensorFlow and Keras is actually really simple. In code, it looks like this.
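Here's a sketch of that call, using the variable names from the data-loading sketch above (the exact line is in the Colab notebook):

```python
# Train the network on the training data.
model.fit(X_train, Y_train, batch_size=1, epochs=50)
```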
It’s a single line!
X_train is the input training data and Y_train contains the corresponding correct labels for that data. Batch size refers to how many examples we want to use for each round of gradient descent (each round of adjusting the connections). I set the batch size to 1, which means we perform backpropagation (calculating the inaccuracy) on each example and then perform a gradient-descent update for each example as well.
However, you can also set the batch size to a higher number, such as 10. That would mean performing backpropagation on 10 examples, combining the results, and then performing a single gradient-descent update for all 10 examples, rather than one update per example (10 updates in total).
This would reduce the time it takes to train the model. However, training time isn't a big issue for this model, and a larger batch size might also reduce the model's ability to generalise (make accurate predictions on new cases), which is undesirable. Therefore 1 seems to be a good choice, but of course this is an easy thing to test and play around with.
Epochs is the final parameter given to the function. It refers to how many times we want the training algorithm to go through the entire set of examples. Each time the algorithm performs backpropagation and gradient descent on all the training examples is one epoch.
The number of epochs is currently set to 50, which means the program will cycle through the dataset 50 times and perform 50 rounds of training. When I was testing this program, 50 seemed to provide good results with good training time.
Testing
To test our neural network we want to perform predictions on data it hasn’t seen before. This is when we use the data we separated out earlier.
All we have to do is feed this data as input to the network and get back the predictions, which is done with the following code.
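A sketch of that step (again using the hypothetical variable names from above; the network outputs a probability, so it's rounded into a 0/1 label):

```python
# Run the test biopsies through the trained network.
# Each prediction is a probability, so threshold it at 0.5 to get
# a 0 (benign) or 1 (malignant) label.
predictions = model.predict(X_test)
predicted_labels = (predictions > 0.5).astype(int).flatten()
```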
Now that we have the predictions, we can print them out with the following code to better understand them. This code also counts the number of correct and incorrect predictions.
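A sketch of that loop (built on the hypothetical variable names from the earlier sketches):

```python
# Compare each prediction with the true diagnosis and keep a running tally.
correct, incorrect = 0, 0
for predicted, actual in zip(predicted_labels, Y_test):
    outcome = "correct" if predicted == int(actual) else "incorrect"
    print(f"Predicted: {predicted}  Actual: {int(actual)}  ({outcome})")
    if predicted == int(actual):
        correct += 1
    else:
        incorrect += 1

print(f"{correct} correct, {incorrect} incorrect")
print(f"Accuracy: {correct / (correct + incorrect) * 100:.1f}%")
```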
Voila! From the output of this code (below), we can see that a breast cancer detection model was successfully trained and implemented.
It managed to classify 112 out of 114 biopsies correctly! That's a 98.2% accuracy.
Ideally, we would've wanted it to make 0 incorrect predictions, but that's likely something that could be improved with more data and a better model architecture.
With machine learning, a lot of the development process is about getting the best dataset for your model and designing a model best suited to solving the specific problem (in this case classifying biopsies as malignant or benign).
The Future
To this day, 1 in 36 women still dies from breast cancer! We need much more advancement in the medical field when it comes to developing better ways to detect and treat diseases. AI offers a way to do that, and with more research and innovation it has the potential to make a significant impact on the healthcare industry.
Overall, this classification model was quite successful, and it points toward a future where, rather than visiting a doctor to receive a diagnosis, we might all just visit AI machines. We would input our medical data and x-rays into these programs and they would diagnose us with remarkable accuracy and reliability.
This future would strengthen trust in the healthcare industry, improve efficiency, and help save even more people’s lives.
On a related note, if you’d like to donate to the Breast Cancer Society of Canada in support of cancer research, visit this website.
The complete code for this project can be found on my GitHub here.
Thanks for reading! Before you go, if you liked this article feel free to
- Clap this article
- Share it with others who might benefit
- Connect with me on LinkedIn