The AI Faceoff: My Real-World Experiment with ChatGPT and DeepSeek

DeepSeek R1 is a large language model (LLM) that has created quite a stir with its cutting-edge technology, computing capabilities, and remarkable performance. The biggest reason for all the attention, however, is its affordable price. It has challenged the mega giant (and our best friend on most days), ChatGPT. This is interesting because, over the past few years, no other AI model has seriously competed with OpenAI's ChatGPT.

To get a real sense of how both models work, I asked them the same questions and then fact-checked the results. After some research, I understood that both LLMs are incredibly robust in their mathematical and coding abilities. I decided to put my computer science degree to use by giving both models an intermediate-level coding challenge: write Python code to find all the anagrams of a word in a list. I observed slight differences in their approach, complexity, readability, and performance. ChatGPT used a sorting method, while DeepSeek used a frequency-count method. In simple terms, the frequency-count method tallies how often each letter appears instead of sorting the letters, which makes each comparison cheaper. I ran a performance benchmark to check which solution was more efficient. For a single word and smaller datasets, DeepSeek's approach performed better, but for larger datasets and longer words, ChatGPT's approach was extremely efficient. This suggests that ChatGPT can reason in ways that are more useful in real-world scenarios: it is more adaptable to large-scale datasets, AI implementation in businesses, and the high-scale natural language processing problems that could arise in the future.
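
To make the difference concrete, here is a minimal sketch of the two techniques as I understood them; it is my own illustration, not the exact code either model produced. The sorting approach compares each candidate against the sorted letters of the target word, while the frequency-count approach compares letter tallies.

```python
from collections import Counter

def find_anagrams_sorted(word, candidates):
    """Sorting approach (the style ChatGPT suggested): two words are
    anagrams if their letters, once sorted, are identical."""
    target = sorted(word.lower())
    return [c for c in candidates if sorted(c.lower()) == target]

def find_anagrams_counted(word, candidates):
    """Frequency-count approach (the style DeepSeek suggested): two words
    are anagrams if every letter appears the same number of times in both."""
    target = Counter(word.lower())
    return [c for c in candidates if Counter(c.lower()) == target]

words = ["listen", "silent", "enlist", "google", "inlets", "banana"]
print(find_anagrams_sorted("listen", words))   # ['listen', 'silent', 'enlist', 'inlets']
print(find_anagrams_counted("listen", words))  # same result
```

Counting letters is linear in the word length, while sorting adds a logarithmic factor per word, though hashing overhead and constant factors mean real timings can diverge from the asymptotic picture, which is exactly why benchmarking on different dataset sizes is worthwhile.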

I decided to give DeepSeek another go and used it for one of my research projects in the artificial intelligence and machine learning domain. The project aims to improve breast cancer care through early detection and intervention for underserved populations in Chicago. I have already developed this project, and my main aim is to improve it further by increasing accuracy and efficiency. My model is 98 percent accurate, but in healthcare false negatives can be disastrous: if my model classifies even two out of ten cases as non-malignant when they are in fact malignant, it can be life-threatening for those patients. So I used DeepSeek to try to improve my model's performance. I found its suggestions useful to an extent. It gave me intricate and complex techniques that require more learning to implement. DeepSeek showed a strong understanding of the data and helped me fix some important aspects of my code. However, its advice was broad rather than specific to my dataset; the methods it suggested were generalized, with few implementation alternatives.
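
To make the false-negative concern concrete, here is a small sketch of how one might measure it with scikit-learn's confusion matrix and recall; the labels below are invented for illustration and are not my project's data.

```python
from sklearn.metrics import confusion_matrix, recall_score

# Invented labels for illustration only (1 = malignant, 0 = non-malignant).
y_true = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]  # two malignant cases missed

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"False negatives: {fn}")                                     # 2
print(f"Recall (sensitivity): {recall_score(y_true, y_pred):.2f}")  # 0.60
```

Tracking recall alongside overall accuracy is what keeps a 98-percent-accurate model honest about the malignant cases it misses.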

In conclusion, with Microsoft's backing and a strong presence in Western markets, ChatGPT proves to be more reliable, computationally efficient, and well suited for large-scale applications. DeepSeek is not far behind and is a steadily rising alternative that could see more use in Asian and Eastern markets. Personally, I would prefer ChatGPT, as it is more flexible for business use cases, analysis, and data modeling, which is closer to what I do. However, I encourage readers to give both a try and decide for themselves.

Sources:

  1. https://www.geeksforgeeks.org/deepseek-vs-chatgpt/
  2. https://www.bbc.com/news/articles/cqx9zn27700o
