Y Combinator, the renowned startup incubator, recently hosted an event in San Francisco that brought together founders, venture capitalists, and policymakers to discuss the role of open source artificial intelligence (AI) in the tech industry. The event, which deviated from Y Combinator’s usual focus on startup pitches, highlighted the growing significance of AI in the battle between Big Tech companies and smaller players.
Open source AI models have gained prominence since OpenAI released ChatGPT in late 2022, sparking discussion of how much disruption they could bring to the AI landscape. The event saw a strong endorsement of open source AI, with Lina Khan, the chairperson of the Federal Trade Commission (FTC), emphasizing its importance for fostering fair competition and enabling smaller companies to enter the market.
Khan’s remarks resonated with the audience of approximately 200 entrepreneurs, who recognize the influence that dominant technology companies wield when they control the raw materials and infrastructure necessary for AI development. The event also featured US assistant attorney general Jonathan Kanter, who echoed the FTC’s commitment to protecting the interests of “little tech” and ensuring a level playing field.
Y Combinator’s shift towards engaging with policymakers and the DC establishment can be attributed to the appointment of policy expert Luther Lowe, who has facilitated conversations between Y Combinator and Washington. This move has brought a new level of polish and high-profile policy discussions to Y Combinator events, with Khan’s appearance marking her second address to Y Combinator founders since Lowe’s arrival.
Throughout the event, open source AI was hailed as a departure from the app-centric focus of the past decade. The recent release of Meta’s open source AI model, Llama 3.1, further reinforced the growing sentiment among technologists that they no longer wish to be constrained by platform restrictions and arbitrary rules.
However, it is worth noting that OpenAI, despite its name, does not release its most powerful AI systems as open source. Some of the code remains private, and access to its technology comes at enterprise prices. Proponents of open source AI counter that smaller, fine-tuned models can yield better results on enterprise tasks than larger proprietary ones.
While open source AI models offer opportunities for innovation and competition, they also carry inherent risks. Critics caution that their openness and accessibility can be exploited by malicious actors, who can fine-tune away safety guardrails and misuse the models. Additionally, the "open source" label attached to some AI models has been challenged, as the underlying data and licensing restrictions may still favor the original model-maker.
The event also featured California state senator Scott Wiener, who discussed his controversial AI Safety and Innovation Bill, SB 1047. The bill aims to establish standards for AI models that cost over $100 million to train, prioritize safety testing, protect whistleblowers, and provide legal recourse in case of extreme harm caused by AI systems. Wiener acknowledged the critical feedback from the open source community and highlighted amendments made to address concerns.
Andrew Ng, a prominent figure in the tech industry, delivered the keynote speech at the event, advocating for the continued innovation enabled by open source models. Ng emphasized the importance of investing in software development rather than legal battles.