Security & Governance

Open-Weight Models Come with 'Great Control, Greater Responsibility' for the Enterprise

AI Data Press - News Team
|
October 2, 2025
Credit: Outlever

Key Points

  • Manish Jain, VP of AI Architecture at Firstsource, describes how OpenAI's GPT-OSS model is narrowing the performance gap with closed-source systems.

  • He explains that with greater control also comes significant operational responsibilities, many of which make universal adoption of the open-weight model unlikely.

  • Jain recommends self-hosting for its data sovereignty and performance benefits, but notes that it often requires significant infrastructure investment.

"If you want greater control over your AI, then you must accept that it comes with greater responsibility. This path is not for everyone. You should only consider self-hosting if you have a compelling business case that justifies the significant investment."

Manish Jain

VP and Global Head of AI Architecture

The release of GPT-OSS is already proving that open-weight models can compete with top-tier rivals. Now, OpenAI's new model performs similarly to closed-source leaders in advanced reasoning, math, and coding. But that doesn't make it a universal solution. In fact, for many leaders, it's creating a strategic dilemma.

To understand what the release means for enterprises, we spoke with Manish Jain, VP and Global Head of AI Architecture at Firstsource, a global outsourcing and offshoring consultancy. With over two decades of experience in technology, including 10 years of architecting enterprise AI, Jain is a seasoned guide to AI adoption. From his perspective, the move is a departure from a long-standing tradition.

Before GPT-OSS, even the most powerful open-weight models struggled to match the performance of closed-source leaders, he explains. But unlike its predecessors, this model has the power and pedigree to compete with players at the highest level.

  • Closing the gap: OpenAI’s first open-weight release in five years, the model immediately performed in the same league as elite closed-source systems. "The performance is very close to GPT-4o mini on several mathematics and coding benchmarks, a true milestone for open-weight models."

  • Community craze: A pent-up demand for a contribution from market leaders added another layer of excitement, according to Jain. "The community reacted immediately. Within hours, GPT-OSS became the number one downloaded model on Hugging Face because everyone was eager to finally see what a true open-weight release from OpenAI actually looks like."

But for Jain, any conversation about GPT-OSS begins with a blunt assessment of its trade-offs. The appeal of total control must be weighed against its cost in responsibility, he cautions.

  • A word of warning: "If you want greater control over your AI, then you must accept that it comes with greater responsibility. This path is not for everyone. You should only consider self-hosting if you have a compelling business case that justifies the significant investment."

The primary drivers for in-house AI are control and operational independence, Jain continues. The real prize, however, is superior performance: by fine-tuning an open-weight model on private data, he says, companies can build a customized engine that significantly outperforms general-purpose models on their own tasks.

  • Keeping data in-house: For organizations in regulated sectors, Jain frames on-premise deployment as a necessity, not a choice. "In many regulated industries like healthcare or finance, compliance is non-negotiable. They have strict requirements that data cannot move to an external location. If I can deploy a model on-premises, I can guarantee our sensitive data never leaves our firewalls."

  • Avoiding dependency: Operational resilience is another key driver of adoption, according to Jain. But outsourcing a critical function introduces unacceptable business risk. "You cannot make your critical business operations dependent on a third-party API. If they go down, your business goes down with them."

  • The final stretch: The most significant advantage of self-hosting is enhanced performance for specialized tasks, he continues. "Fine-tuning a model on your own private data is the 'last-mile stretch.' It is where you go beyond the model's general capabilities and unlock its true value for your specific business."
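Jain's point about third-party API risk translates directly into an architectural pattern: route requests to the hosted provider first and fail over to a self-hosted model when it is unreachable. The sketch below is a minimal illustration of that idea; the `hosted_api` and `local_model` functions are hypothetical stand-ins, not real clients.

```python
# Sketch of the failover pattern implied by Jain's dependency warning:
# prefer the hosted API, fall back to a self-hosted model on failure.
# All endpoint functions here are hypothetical placeholders.

from typing import Callable

def with_fallback(primary: Callable[[str], str],
                  fallback: Callable[[str], str]) -> Callable[[str], str]:
    """Return a completion function that falls back when the primary fails."""
    def complete(prompt: str) -> str:
        try:
            return primary(prompt)
        except Exception:
            # Hosted provider is down or unreachable: use the local model.
            return fallback(prompt)
    return complete

# Stand-ins for a hosted API client and a self-hosted open-weight model.
def hosted_api(prompt: str) -> str:
    raise ConnectionError("provider outage")

def local_model(prompt: str) -> str:
    return f"[local] {prompt}"

complete = with_fallback(hosted_api, local_model)
print(complete("Summarize the quarterly report."))  # served locally despite the outage
```

In production this wrapper would also cover timeouts, retries, and health checks, but the core insurance policy is the same: a local model keeps critical operations running when the third-party API does not.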

But those benefits require substantial up-front investment in infrastructure, talent, and governance. Framing it as a "performance-versus-convenience" trade-off, Jain recommends beginning with a rigorous cost-benefit analysis. For him, achieving control demands deep DevOps expertise, a rigid compliance framework, and continuous monitoring.

  • A familiar cycle: A reactive, three-step shuffle is a classic symptom of an organization lacking a proactive AI governance strategy, Jain explains. "I see a familiar cycle play out constantly. First, a company opens up access to a new tool like ChatGPT. Then, a security officer realizes the data risks and blocks it entirely. Only after the block does the organization realize it needs a formal AI policy, so they scramble to write one and then slowly open access back up."

The 120-billion-parameter model is aimed at heavy-duty enterprise use, but the release also included a compact version designed to broaden access. Because it can run on commodity hardware, the smaller model helps democratize AI.

  • Power on your PC: Because it removes the dependency on massive GPU clusters, the smaller model makes powerful AI accessible without specialized infrastructure, Jain clarifies. "The smaller models are revolutionary because you can run a powerful LLM on a standard consumer laptop. You are no longer dependent on a massive, expensive cluster of GPUs." However, he also notes the model's current limitations as a text-only system, lacking the multimodal capabilities of the full ChatGPT.

Eventually, the freedom to optimize the model will enable powerful LLMs to run on edge devices, such as smartphones and sensors, Jain concludes. "Because it is open-weight, you have the freedom to modify it. You can apply techniques that shrink the model’s footprint, making it small and efficient enough to fit onto devices with very little memory."
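The footprint-shrinking techniques Jain alludes to usually start with quantization: replacing 32-bit floating-point weights with low-precision integers plus a scale factor. The sketch below shows the simplest symmetric int8 scheme as an illustration; production deployments use more sophisticated variants (per-channel scales, 4-bit formats, calibration data), which the open weights leave anyone free to apply.

```python
import numpy as np

# Minimal sketch of symmetric int8 post-training quantization, the kind
# of footprint-shrinking technique enabled by open weights. Real edge
# deployments use finer-grained schemes, but the principle is the same.

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 values plus one scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximately reconstruct the original weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # toy weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"size: {w.nbytes} -> {q.nbytes} bytes (4x smaller)")
print(f"max reconstruction error: {np.abs(w - w_hat).max():.4f}")
```

Each weight shrinks from four bytes to one at the cost of a small, bounded rounding error, which is why quantized models can run in the limited memory of phones and other edge devices.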

But the most significant impact of the release is its acceleration of innovation across the industry. For Jain, the pace of change is unprecedented. "This release is revolutionary because it fast-tracks what we thought was the future. Capabilities that seemed three to five years away are now possible within the next six to twelve months."