
OpenAI has dropped its first freely available models since GPT-2—gpt-oss-120b and gpt-oss-20b. These new open-weight AI models are downloadable, fully customisable, and capable of running entirely on your own hardware. And they’re not just toys—these models bring serious capabilities to the table.

Two Models, Two Use Cases

gpt-oss-120b:

  • 117 billion parameters
  • Uses Mixture-of-Experts (MoE) to activate only 5.1B parameters per token (see the toy routing sketch after these lists)
  • Can run efficiently on a single 80GB GPU

gpt-oss-20b:

  • Lightweight enough to run on laptops with 16GB RAM
  • Performs on par with OpenAI’s o3-mini model
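
To make the MoE idea concrete, here is a toy routing sketch in Python: a router scores a handful of experts for each token and only the top-scoring ones do any work, which is why the active parameter count stays far below the total. The expert count, dimensions, and router below are made up for illustration and are not gpt-oss-120b’s actual architecture.

```python
# Toy mixture-of-experts routing: per token, pick the top-k experts and
# run only those. All sizes here are illustrative, not gpt-oss values.
import numpy as np

rng = np.random.default_rng(0)
num_experts, d_model, top_k = 8, 16, 2

router_w = rng.normal(size=(d_model, num_experts))              # router projection
experts = [rng.normal(size=(d_model, d_model)) for _ in range(num_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts only."""
    logits = x @ router_w
    top = np.argsort(logits)[-top_k:]                           # chosen expert indices
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()   # softmax over chosen experts
    # Untouched experts stay idle, so active parameters per token
    # are a small fraction of the model's total parameters.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)  # (16,)
```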

Both models handle reasoning tasks, support tool use such as web browsing and Python code execution, and can hand off to OpenAI’s cloud models for tasks like image generation or other advanced multimodal capabilities.
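
If you want to try the smaller model locally, the Hugging Face transformers library can load it and run a chat turn in a few lines. This is a minimal sketch: the repo id "openai/gpt-oss-20b" and the device settings are assumptions to check against the listing on Hugging Face, and the first run downloads several gigabytes of weights.

```python
# Minimal local chat with gpt-oss-20b via Hugging Face transformers.
# Repo id and hardware assumptions are illustrative; verify them on
# the Hugging Face model page before running.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",   # assumed Hugging Face repo id
    torch_dtype="auto",           # let transformers pick a suitable dtype
    device_map="auto",            # place weights on GPU/CPU automatically
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts in two sentences."},
]

# The pipeline applies the model's chat template and generates a reply.
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"])
```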

Run It Anywhere

These models are designed for maximum flexibility:

  • Local laptops or desktops
  • On-premise servers
  • Cloud platforms like Azure and AWS
  • Optimized for Windows through collaboration with Microsoft
  • Already available via Hugging Face and other major platforms
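
Many of these targets, including local servers such as vLLM or Ollama, expose an OpenAI-compatible HTTP API, so the same client code works whether the model runs on a laptop, an on-premise box, or a cloud GPU. Below is a minimal sketch, assuming such a server is already listening locally; the URL, port, API key handling, and model name are placeholders to adjust for your setup.

```python
# Query a locally served gpt-oss model through an OpenAI-compatible
# endpoint. URL, key, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # your local server, not OpenAI's cloud
    api_key="not-needed-locally",         # most local servers ignore the key
)

response = client.chat.completions.create(
    model="gpt-oss-120b",                 # whatever name your server registered
    messages=[{"role": "user", "content": "Summarise the Apache 2.0 licence."}],
)
print(response.choices[0].message.content)
```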

What’s Open and What’s Not

The training data and source code remain proprietary, but the model weights (parameters) are freely available under the Apache 2.0 licence. Developers can modify, fine-tune, and deploy the models however they like.

Built-In Tool Use and Safety

  • Native support for tool use such as web browsing and code execution
  • Adjustable chain-of-thought reasoning levels (see the sketch after this list)
  • Extensive safety testing by OpenAI to reduce misuse risks
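
The adjustable reasoning is controlled through the prompt: OpenAI describes low, medium, and high reasoning levels for gpt-oss that can be requested in the system message. The sketch below reuses the local OpenAI-compatible setup from the previous section; the exact "Reasoning: high" wording and the model name are assumptions to verify against the model card.

```python
# Request deeper chain-of-thought by setting a reasoning level in the
# system message. Prompt wording and model name are assumptions; check
# the gpt-oss model card for the exact convention your version uses.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="gpt-oss-20b",  # placeholder local model name
    messages=[
        {"role": "system", "content": "Reasoning: high"},  # low / medium / high
        {"role": "user", "content": "Plan a test strategy for a payments API."},
    ],
)
print(response.choices[0].message.content)
```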

These models are now the talk of the AI market, giving developers and researchers a powerful, flexible, and customisable toolkit for free. Whether you are experimenting on a laptop or deploying at scale, gpt-oss marks a major shift in local AI capabilities.