AI Research
Apr 24, 2026

DeepSeek introduces new AI models aiming to compete with leading technologies

AI Summary

DeepSeek has unveiled two versions of its latest large language model, DeepSeek V4, which features significant architectural improvements and a mixture-of-experts approach. The models, V4 Flash and V4 Pro, claim to enhance efficiency and performance while being more affordable than existing frontier models, although they still lag behind in certain knowledge benchmarks.

  • DeepSeek has launched two preview versions of its large language model, DeepSeek V4: V4 Flash and V4 Pro. Both update the previous V3.2 release and feature a mixture-of-experts architecture with context windows of 1 million tokens.
  • The V4 Pro model has 1.6 trillion parameters, making it the largest open-weight model available, while V4 Flash has 284 billion parameters. Both models are said to be more efficient and performant than V3.2.
  • DeepSeek claims that V4 Pro outperforms open-source peers on reasoning benchmarks and competes with OpenAI’s GPT-5.2 and Gemini 3.0 Pro on certain tasks. Its performance on coding benchmarks is reportedly comparable to GPT-5.4.
  • However, the models reportedly fall short of frontier models such as GPT-5.4 and Gemini 3.1 Pro on knowledge tests, suggesting a development gap of roughly three to six months.
  • Both models currently support text only, unlike many closed-source models that can handle audio, video, and images.
  • The V4 Flash model is priced at $0.14 per million input tokens and $0.28 per million output tokens, while the V4 Pro model costs $0.145 per million input tokens and $3.48 per million output tokens, making them more affordable than several competitors.
  • The launch follows accusations from the U.S. regarding Chinese IP theft in the AI sector, with DeepSeek facing similar allegations from other companies about copying AI models.
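The per-token prices quoted above can be turned into a quick cost estimate for a sample request. The sketch below uses only the rates from the article; the workload sizes (50k input tokens, 2k output tokens) are made-up illustrative figures, not anything DeepSeek publishes.

```python
# Rough per-request cost comparison using the per-million-token
# prices quoted in the article. The sample workload sizes are
# illustrative assumptions, not published figures.

PRICES = {
    "V4 Flash": {"input": 0.14, "output": 0.28},
    "V4 Pro": {"input": 0.145, "output": 3.48},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 50k-token prompt producing a 2k-token completion.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 50_000, 2_000):.4f}")
```

At these rates the gap between the two tiers is driven almost entirely by output pricing: V4 Pro's output tokens cost over 12x those of V4 Flash, while input pricing is nearly identical.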
ai models · efficiency · reasoning benchmarks · architectural improvements · deepseek