There Was Never Such a Thing as ‘Open’ AI

Transparency isn’t enough to democratize the technology.

At the turn of the century, when the modern web was just emerging and Microsoft was king, a small but growing technology movement posed an existential threat to the company. Steve Ballmer, Microsoft’s CEO at the time, called one of its core elements “a cancer that attaches itself” to “everything it touches.” The disease was a competing operating system, Linux, and the open-source software it represented: programs that were free for anyone to download, modify, and use, in contrast to expensive, proprietary software such as Microsoft Windows and Office.

Open-source software did eventually attach itself to much of the internet—Mozilla Firefox, the Android operating system, and Wikipedia are all “open” projects—but the tech industry managed to turn the egalitarian philosophy into a business opportunity. Trillion-dollar companies use free open-source software to build or enhance their own products. And open-source anything is still frequently designed for, and depends on, the Big Tech platforms, gadgets, and data servers that mediate most internet access—in turn attracting users to the world’s most powerful firms. Just running an application or hosting a website almost certainly requires purchasing computing hours from a cloud server operated by the likes of Microsoft, Google, or Amazon.

Now the nascent generative-AI industry is facing a similar issue. More and more people are using AI products offered by major companies, and very few have any insight into or say over how the technology works. In response, a growing number of researchers and organizations are throwing their support behind open AI (not to be confused with OpenAI, the secretive company behind ChatGPT). The idea is to create relatively transparent models that the public can more easily and cheaply use, study, and reproduce, attempting to democratize a highly concentrated technology that may have the potential to transform work, politics, leisure, and even religion. But this movement, like the open-source revolution before it, faces the risk of being subsumed by Big Tech.

There is no better illustration of the tension than Llama 2, the most prominent and controversial AI system professing “openness”—which was created by Meta, the titanic owner of Facebook, Instagram, WhatsApp, and Threads. Released last summer, Llama 2 is a large language model that, although less powerful than those underlying ChatGPT and Google’s Bard, is free for both research and commercial uses. But although the model’s final code is available to download, Meta forbids certain uses of that code. Developers cannot leverage Llama 2 to improve any other language model, and they need Meta’s express permission to integrate Llama 2 into products with more than 700 million monthly users—a policy that would bar TikTok from freely using the technology, for example. And much of Llama’s development pipeline is secret—in particular, nobody outside of Meta knows what data the model was trained on. Independent programmers and advocates have said that it does not qualify as open.


Still, start-ups, universities, and nonprofits can download and use Llama 2 for basically any purpose. In addition to incorporating the model into products, they can to some extent investigate the sources of Llama 2’s capabilities and limitations—much harder tasks with “closed” technology such as ChatGPT and Bard. In a written statement, a Meta spokesperson told me that the company’s approach to openness “allows the AI community to use Llama 2 to help advance” AI in a safe and responsible way.

Usage restrictions aside, for a generative-AI model to be truly open requires releasing more than just the final program. Training data, the code used to process it, the steps taken to fine-tune the algorithm, and so on are key to understanding, replicating, or modifying AI. Older forms of open software could be packaged in a simple .zip file and freely distributed; AI is not so easily contained or accessed. “One could argue many of the projects we currently talk about being ‘open’ AI are not open-source at all,” Udbhav Tiwari, the head of global product policy at Mozilla, told me. Some critics deem such nominally accessible releases examples of “open washing,” wherein companies accrue reputation and free research without actually providing the information needed for somebody to deeply study, re-create, or compete with their models; global efforts are under way to redefine “open-source” for AI.

There are more substantially open models, usually released by nonprofits and small start-ups, which provide greater details about training and have fewer usage restrictions. But even these models run up against the tremendous complexity and resource requirements of generative AI. If classic open-source programs were akin to bicycles in being easy to understand and fix, AI is more like a Tesla. Given the engineering plans for such an advanced car, very few people could repair one on their own, let alone manufacture it. Similarly, when you ask ChatGPT or Bard a question, the response on your screen is the end product of hundreds of millions of dollars in computing power, not to mention spending on computer chips, salaries, and more. Almost nobody other than the tech titans and start-ups partnered with them, such as OpenAI, can afford those sums.

Running those models for a large number of users is similarly expensive. Universities, nonprofits, and start-ups “cannot create these kinds of models on their own,” Nur Ahmed, who studies the AI industry at the MIT Sloan School of Management, told me. Already, the pool of AI venture capital is showing signs of drying up as investors fear that start-ups won’t have the resources to compete with the most powerful tech companies.


“You’re open-sourcing the code, the weights, or the data, in some combination. But never the compute, never the infrastructure,” Mohamed Abdalla, who studied Big Tech’s influence on AI as a computer scientist at the University of Toronto, told me. Large companies do not provide the computing power or human talent needed to become even a small-time competitor or substantially sway the direction of AI development. Tremendous resources are also needed to audit even “open” models—it took almost two years to identify images of child sex abuse in the largest open-source data set of images used to train generative AI. “There’s a really big difference between saying that open-source is going to democratize access to AI, and open-source is going to democratize the industry,” Sarah Myers West, the managing director of the AI Now Institute, told me.

A handful of efforts are attempting to shift AI infrastructure away from dominant tech companies and toward the public. The federal government has plans to build a National AI Research Resource; several universities have partnered to create a high-performance computing center in Boston for advanced AI research. Yannis Paschalidis, a computer scientist at Boston University, which contributes to that computing center, told me that, for now, “I don’t think I can train the next generation of ChatGPT with trillions of parameters, but I can fine-tune a model or train a smaller, specialized model.”

Researchers are also designing smaller, open models that are sufficiently powerful for many commercial uses, and cheaper to train and run. For instance, EleutherAI, a nonprofit research lab that releases open-source AI, began with a group of researchers trying to make an open alternative to OpenAI’s closed GPT-3. “We wanted to train models like this, we wanted to learn how they work, and we wanted to make smaller scale versions of them publicly accessible,” Stella Biderman, the executive director of EleutherAI, told me. Nonetheless, many programmers, start-ups, nonprofits, and universities can’t create even smaller models without substantial grant money, or can only tinker with models provided by wealthier companies.

Even resources that ostensibly help the open-source community can be beneficial for the tech giants: Google and Meta, for instance, created and help maintain widely used, free software libraries for machine learning. On an earnings call last spring, Meta CEO Mark Zuckerberg said that it has “been very valuable for us to provide that, because now all of the best developers across the industry are using tools that we’re also using internally.” When AI projects are built with Meta’s tools, they are easy to commercialize and can draw users into the Meta-product ecosystem. (Asked about the profit motive behind open-AI libraries, a Meta spokesperson told me, “We believe in approaches that can benefit Meta directly but also help spur a healthy and vibrant AI ecosystem.”) Championing some form of “open” AI development, as many tech executives have, could also be a strategy to combat unwanted regulation; why restrict open-source projects that theoretically represent more competition in the marketplace? Resource constraints, of course, mean those projects are unlikely to seriously threaten leading AI firms.

Meanwhile, Silicon Valley’s ability to attract talent and produce the largest, best-performing AI products means that research and attention bend toward the programs, software architectures, and tasks those companies find most valuable. This, in turn, ends up “shaping the research direction of AI,” Ahmed said.


Right now the tech industry values and profits from scale: larger models running on corporate data servers in pursuit of fractional improvements on select benchmarks. An analysis of influential AI papers in recent years found that studies prioritized performance and novelty, whereas values such as “respect for persons” and “justice” were almost nonexistent. These technical papers set the direction for AI programs used in many products and services. “The downstream impact could be someone being denied a job or someone being denied a housing opportunity,” Abeba Birhane, an AI researcher at Mozilla who co-authored that study, told me.

The resources needed to build generative AI have allowed the tech industry to warp what the public expects from the technology: If ChatGPT is the only way you can imagine language models working, anything that doesn’t work like ChatGPT is inadequate. But that would also be a very narrow way to build and use generative AI. Few people buy a car based solely on its horsepower; most consider size, design, mileage, infotainment system, safety, and more. People might also be willing to sacrifice performance for a more fair and transparent chatbot—benefiting from open AI will require not just redefining open-source, but reimagining what AI itself can and should look like.
