OpenAI
OpenAI is an American artificial intelligence (AI) organization consisting of the non-profit OpenAI, Inc.[4] registered in Delaware and its for-profit subsidiary corporation OpenAI Global, LLC.[5] OpenAI researches artificial intelligence with the declared intention of developing "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".[6]
Industry | Artificial intelligence
---|---
Founded | December 10, 2015
Headquarters | San Francisco, California, U.S.[1]
Revenue | US$28 million[2] (2022)
Net income | −US$540 million[2] (2022)
Number of employees | c. 500 (2023)[3]
Website | openai.com
OpenAI was founded in 2015 by Ilya Sutskever, Greg Brockman, Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, Jessica Livingston, John Schulman, Pamela Vagata, and Wojciech Zaremba, with Sam Altman and Elon Musk serving as the initial board members.[7][8][9] Microsoft provided OpenAI Global LLC with a $1 billion investment in 2019 and a $10 billion investment in 2023.[10][11]
History
2015–2018: Non-profit beginnings
In December 2015, Sam Altman, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Elon Musk, Amazon Web Services (AWS), Infosys, and YC Research announced[12] the formation of OpenAI and pledged over $1 billion to the venture. However, only about $130 million of those pledges had actually been collected by 2019.[5] The organization stated it would "freely collaborate" with other institutions and researchers by making its patents and research open to the public.[13][14] OpenAI is headquartered at the Pioneer Building in the Mission District of San Francisco.[15][16]
According to Wired, Brockman met with Yoshua Bengio, one of the "founding fathers" of deep learning, and drew up a list of the "best researchers in the field".[17] Brockman was able to hire nine of them as the first employees in December 2015.[17] In 2016, OpenAI paid corporate-level (rather than nonprofit-level) salaries, but did not pay AI researchers salaries comparable to those of Facebook or Google.[17]
Microsoft's Peter Lee stated that the cost of a top AI researcher exceeds the cost of a top NFL quarterback prospect.[17] OpenAI's potential and mission drew these researchers to the firm; a Google employee said he was willing to leave Google for OpenAI "partly because of the very strong group of people and, to a very large extent, because of its mission."[17] Brockman stated that "the best thing that I could imagine doing was moving humanity closer to building real AI in a safe way."[17] OpenAI co-founder Wojciech Zaremba stated that he turned down "borderline crazy" offers of two to three times his market value to join OpenAI instead.[17]
In April 2016, OpenAI released a public beta of "OpenAI Gym", its platform for reinforcement learning research.[18] Nvidia gifted its first DGX-1 supercomputer to OpenAI in August 2016 to help it train larger and more complex AI models with the capability of reducing processing time from six days to two hours.[19][20] In December 2016, OpenAI released "Universe", a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications.[21][22][23][24]
In 2017 OpenAI spent $7.9 million, or a quarter of its functional expenses, on cloud computing alone.[25] In comparison, DeepMind's total expenses in 2017 were $442 million. In the summer of 2018, simply training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for multiple weeks.
In 2018, Musk resigned from his board seat, citing "a potential future conflict [of interest]" with his role as CEO of Tesla due to Tesla's AI development for self-driving cars.[26] Sam Altman claims that Musk believed OpenAI had fallen behind other players like Google, and that Musk proposed to take over OpenAI himself, which the board rejected. Musk subsequently left OpenAI but said he would remain a donor; however, he made no donations after his departure.[27]
In February 2019, GPT-2 was announced, attracting considerable attention for its ability to generate human-like text.[28]
2019: Transition from non-profit
In 2019, OpenAI transitioned from non-profit to "capped" for-profit, with profit for investors capped at 100 times any investment (so, for example, a $1 million investment could return at most $100 million).[29] According to OpenAI, the capped-profit model allows OpenAI Global LLC to legally attract investment from venture funds and, in addition, to grant employees stakes in the company, the goal being that they can say "I'm going to OpenAI, but in the long term it's not going to be disadvantageous to us as a family."[30] Many top researchers work for Google Brain, DeepMind, or Facebook, which offer stock options that a nonprofit cannot.[31] Before the transition, public disclosure of the compensation of top employees at OpenAI was legally required.[32]
The company then distributed equity to its employees and partnered with Microsoft,[33] which announced an investment package of $1 billion into the company. Since then, OpenAI systems have run on an Azure-based supercomputing platform from Microsoft.[34][35][36]
OpenAI Global LLC subsequently announced its intention to commercially license its technologies.[37] OpenAI plans to spend the $1 billion "within five years, and possibly much faster."[38] Altman has stated that even a billion dollars may turn out to be insufficient, and that the lab may ultimately need "more capital than any non-profit has ever raised" to achieve artificial general intelligence.[39]
The transition from a nonprofit to a capped-profit company was viewed with skepticism by Oren Etzioni of the nonprofit Allen Institute for AI, who agreed that wooing top researchers to a nonprofit is difficult, but stated "I disagree with the notion that a nonprofit can't compete" and pointed to successful low-budget projects by OpenAI and others. "If bigger and better funded was always better, then IBM would still be number one."
The nonprofit, OpenAI, Inc., is the sole controlling shareholder of OpenAI Global LLC, which, despite being a for-profit company, retains a formal fiduciary responsibility to OpenAI, Inc.'s nonprofit charter. A majority of OpenAI, Inc.'s board is barred from having financial stakes in OpenAI Global LLC.[30] In addition, minority members with a stake in OpenAI Global LLC are barred from certain votes due to conflict of interest.[31] Some researchers have argued that OpenAI Global LLC's switch to for-profit status is inconsistent with OpenAI's claims to be "democratizing" AI.[40]
2020–present: ChatGPT, DALL-E, and partnership with Microsoft
In 2020, OpenAI announced GPT-3, a language model trained on large internet datasets. GPT-3 is aimed at natural-language question answering, but it can also translate between languages and coherently generate improvised text. OpenAI also announced that an associated API, named simply "the API", would form the heart of its first commercial product.[41]
In 2021, OpenAI introduced DALL-E, a deep-learning model that can generate digital images from natural language descriptions.[42]
In December 2022, OpenAI received widespread media coverage after launching a free preview of ChatGPT, its new AI chatbot based on GPT-3.5. According to OpenAI, the preview received over a million signups within the first five days.[43] According to anonymous sources cited by Reuters in December 2022, OpenAI Global LLC was projecting $200 million of revenue in 2023 and $1 billion in revenue in 2024.[44]
As of January 2023, OpenAI Global LLC was in talks for funding that would value the company at $29 billion, double the value of the company in 2021.[45] On January 23, 2023, Microsoft announced a new multi-year US$10 billion investment in OpenAI Global LLC.[46][47] Rumors of this deal suggested Microsoft may receive 75% of OpenAI's profits until it secures its investment return and a 49% stake in the company.[48]
The investment is believed to be part of Microsoft's effort to integrate OpenAI's ChatGPT into the Bing search engine. After ChatGPT was launched, Google announced a similar AI application (Bard), fearing that ChatGPT could threaten Google's position as a go-to source for information.[49][50]
On February 7, 2023, Microsoft announced that it is building AI technology based on the same foundation as ChatGPT into Microsoft Bing, Edge, Microsoft 365 and other products.[51]
On March 3, 2023, Reid Hoffman resigned from his board seat, citing a desire to avoid conflicts of interest between his board seat at OpenAI and his investments in AI technology companies via Greylock Partners, as well as his role as the co-founder of the AI technology startup Inflection AI. Hoffman remained on the board of Microsoft, a major investor in OpenAI.[52]
On March 14, 2023, OpenAI released GPT-4, both as an API (with a waitlist) and as a feature of ChatGPT Plus.[53]
On May 22, 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of superintelligence.[54] They consider that superintelligence could arrive within the next 10 years, allowing a "dramatically more prosperous future", and that "given the possibility of existential risk, we can't just be reactive". They propose creating an international watchdog organization similar to the IAEA to oversee AI systems above a certain capability threshold, while suggesting that relatively weak AI systems below that threshold should not be overregulated. They also call for more technical safety research on superintelligence, and ask for more coordination, for example through governments launching a joint project that "many current efforts become part of".[54][55]
In August 2023, it was announced that OpenAI had acquired the New York-based start-up Global Illumination, a company that deploys AI to develop digital infrastructure and creative tools.[56]
Participants
Key employees:
- CEO and co-founder:[57] Sam Altman, former president of the startup accelerator Y Combinator
- President and co-founder:[58] Greg Brockman, former CTO, 3rd employee of Stripe[59]
- Chief Scientist and co-founder: Ilya Sutskever, a former Google expert on machine learning[59]
- Chief Technology Officer:[58] Mira Murati, previously at Leap Motion and Tesla, Inc.
- Chief Operating Officer:[58] Brad Lightcap, previously at Y Combinator and JPMorgan Chase
Board of the OpenAI nonprofit:[60]
- Greg Brockman
- Ilya Sutskever
- Sam Altman
- Adam D'Angelo
- Tasha McCauley
- Helen Toner
Individual investors:[59]
- Reid Hoffman, LinkedIn co-founder[61]
- Peter Thiel, PayPal co-founder[61]
- Jessica Livingston, a founding partner of Y Combinator
Corporate investors:
- Microsoft[10][11]
Motives
Some scientists, such as Stephen Hawking and Stuart Russell, have articulated concerns that if advanced AI someday gains the ability to re-design itself at an ever-increasing rate, an unstoppable "intelligence explosion" could lead to human extinction. Co-founder Musk characterizes AI as humanity's "biggest existential threat".[65]
Musk and Altman have stated they are partly motivated by concerns about AI safety and the existential risk from artificial general intelligence.[66][67] OpenAI states that "it's hard to fathom how much human-level AI could benefit society," and that it is equally difficult to comprehend "how much it could damage society if built or used incorrectly".[14] Research on safety cannot safely be postponed: "because of AI's surprising history, it's hard to predict when human-level AI might come within reach."[68] OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible."[14] Co-chair Sam Altman expects the decades-long project to surpass human intelligence.[69]
Vishal Sikka, the former CEO of Infosys, stated that an "openness" where the endeavor would "produce results generally in the greater interest of humanity" was a fundamental requirement for his support, and that OpenAI "aligns very nicely with our long-held values" and their "endeavor to do purposeful work".[70] Cade Metz of Wired suggests that corporations such as Amazon may be motivated by a desire to use open-source software and data to level the playing field against corporations such as Google and Facebook which own enormous supplies of proprietary data. Altman states that Y Combinator companies will share their data with OpenAI.[69]
Strategy
Musk posed the question: "What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity." Musk acknowledged that "there is always some risk that in actually trying to advance (friendly) AI we may create the thing we are concerned about"; nonetheless, the best defense is "to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower."[59]
Musk and Altman's counter-intuitive strategy of trying to reduce the risk that AI will cause overall harm, by giving AI to everyone, is controversial among those who are concerned with existential risk from artificial intelligence. Philosopher Nick Bostrom is skeptical of Musk's approach: "If you have a button that could do bad things to the world, you don't want to give it to everyone."[67] During a 2016 conversation about technological singularity, Altman said that "we don't plan to release all of our source code" and mentioned a plan to "allow wide swaths of the world to elect representatives to a new governance board". Greg Brockman stated "Our goal right now... is to do the best thing there is to do. It's a little vague."[71]
Conversely, OpenAI's initial decision to withhold GPT-2 due to a wish to "err on the side of caution" in the presence of potential misuse has been criticized by advocates of openness. Delip Rao, an expert in text generation, stated "I don't think [OpenAI] spent enough time proving [GPT-2] was actually dangerous." Other critics argued that open publication is necessary to replicate the research and to be able to come up with countermeasures.[72]
More recently, in 2022, OpenAI published its approach to the alignment problem. They expect that aligning AGI to human values is likely harder than aligning current AI systems: "Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together". They explore how to better use human feedback to train AI systems. They also consider using AI to incrementally automate alignment research.[73]
OpenAI claims to have developed a way to use GPT-4, its flagship generative AI model, for content moderation, lightening the burden on human teams.[74]
Products and applications
As of 2021, OpenAI's research focuses on reinforcement learning (RL).[75] OpenAI is viewed as an important competitor to DeepMind.[76]
Gym
Announced in 2016, Gym aims to provide an easily implemented general-intelligence benchmark over a wide variety of environments—akin to, but broader than, the ImageNet Large Scale Visual Recognition Challenge used in supervised learning research. It hopes to standardize the way in which environments are defined in AI research publications, so that published research becomes more easily reproducible.[18][77] The project claims to provide the user with a simple interface. As of June 2017, Gym can only be used with Python.[78] As of September 2017, the Gym documentation site was not maintained, and active work focused instead on its GitHub page.[79]
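The value of Gym lies in this standardized interface. Below is a minimal sketch of the classic interaction loop as it appeared in Gym's early documentation; the environment name and the exact return signature of `step()` are assumptions that vary across Gym versions.

```python
import gym  # assumes the original `gym` package (pip install gym)

env = gym.make("CartPole-v1")   # one of Gym's standard benchmark environments
observation = env.reset()       # start a new episode

for _ in range(1000):
    action = env.action_space.sample()                   # random policy, for illustration only
    observation, reward, done, info = env.step(action)   # classic 4-tuple return of older Gym releases
    if done:                                             # episode finished: start a new one
        observation = env.reset()

env.close()
```

Because every environment exposes the same `reset()`/`step()` pair, the same agent code can be evaluated across many environments, which is the standardization goal described above.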
RoboSumo
Released in 2017, RoboSumo is a virtual world where humanoid metalearning robot agents initially lack knowledge of how to even walk, but are given the goals of learning to move and pushing the opposing agent out of the ring.[80] Through this adversarial learning process, the agents learn how to adapt to changing conditions; when an agent is then removed from this virtual environment and placed in a new virtual environment with high winds, the agent braces to remain upright, suggesting it had learned how to balance in a generalized way.[80][81] OpenAI's Igor Mordatch argues that competition between agents can create an intelligence "arms race" that can increase an agent's ability to function, even outside the context of the competition.[80]
OpenAI Five
OpenAI Five is the name of a team of five OpenAI-curated bots that are used in the competitive five-on-five video game Dota 2 and that learn to play against human players at a high skill level entirely through trial-and-error algorithms. Before becoming a team of five, the first public demonstration occurred at The International 2017, the annual premier championship tournament for the game, where Dendi, a professional Ukrainian player, lost to a bot in a live one-on-one matchup.[82][83] After the match, CTO Greg Brockman explained that the bot had learned by playing against itself for two weeks of real time, and that the learning software was a step toward creating software that can handle complex tasks like a surgeon.[84][85] The system uses a form of reinforcement learning: the bots learn over time by playing against themselves hundreds of times a day for months, and are rewarded for actions such as killing an enemy and taking map objectives.[86][87][88]
By June 2018, the ability of the bots expanded to play together as a full team of five, and they were able to defeat teams of amateur and semi-professional players.[89][86][90][91] At The International 2018, OpenAI Five played in two exhibition matches against professional players, but ended up losing both games.[92][93][94] In April 2019, OpenAI Five defeated OG, the reigning world champions of the game at the time, 2:0 in a live exhibition match in San Francisco.[95][96] The bots' final public appearance came later that month, where they played in 42,729 total games in a four-day open online competition, winning 99.4% of those games.[97]
OpenAI Five's performance in Dota 2 illustrates the challenges that multiplayer online battle arena (MOBA) games pose for AI systems and demonstrates how deep reinforcement learning (DRL) agents can achieve superhuman competence in Dota 2 matches.[98]
Gym Retro
Released in 2018, Gym Retro is a platform for RL research on video games.[99] Gym Retro is used to research RL algorithms and study generalization. Prior RL research focused chiefly on optimizing agents to solve single tasks; Gym Retro makes it possible to study generalization between games with similar concepts but different appearances.
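As a rough sketch of how Gym Retro exposes retro games through the same Gym-style interface, the following uses the `gym-retro` package's documented entry point; the game name is the freely redistributable example ROM bundled with the library, and the loop mirrors the Gym example above.

```python
import retro  # assumes the gym-retro package (pip install gym-retro)

# Load a retro game as a Gym-style environment.
env = retro.make(game="Airstriker-Genesis")
observation = env.reset()

done = False
while not done:
    action = env.action_space.sample()                   # random button presses, for illustration
    observation, reward, done, info = env.step(action)   # same Gym-style 4-tuple interface

env.close()
```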
Debate Game
In 2018, OpenAI launched the Debate Game, which teaches machines to debate toy problems in front of a human judge. The purpose is to research whether such an approach may assist in auditing AI decisions and in developing explainable AI.[100][101]
Dactyl
Developed in 2018, Dactyl uses machine learning to train a Shadow Hand, a human-like robot hand, to manipulate physical objects.[102] It learns entirely in simulation using the same RL algorithms and training code as OpenAI Five. OpenAI tackled the object orientation problem by using domain randomization, a simulation approach which exposes the learner to a variety of experiences rather than trying to fit to reality. The set-up for Dactyl, aside from having motion tracking cameras, also has RGB cameras to allow the robot to manipulate an arbitrary object by seeing it. In 2018, OpenAI showed that the system was able to manipulate a cube and an octagonal prism.[103]
In 2019, OpenAI demonstrated that Dactyl could solve a Rubik's Cube. The robot was able to solve the puzzle 60% of the time. Objects like the Rubik's Cube introduce complex physics that is harder to model. OpenAI solved this by improving the robustness of Dactyl to perturbations; they employed a technique called Automatic Domain Randomization (ADR), a simulation approach where progressively more difficult environments are endlessly generated. ADR differs from manual domain randomization by not needing a human to specify randomization ranges.[104]
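The following toy sketch illustrates the idea behind automatic domain randomization: randomization ranges over simulated physics widen automatically whenever the policy is succeeding, so training keeps generating harder environments without hand-tuned ranges. Every name, parameter, and threshold here is invented for illustration and is not OpenAI's implementation.

```python
import random

# Invented physics parameters, each with a [low, high] randomization range.
ranges = {"object_mass": [0.2, 0.2], "surface_friction": [1.0, 1.0]}
GROWTH = 0.05            # how much each range widens after sufficient success
SUCCESS_THRESHOLD = 0.8  # fraction of recent episodes that must succeed before widening

def sample_environment_params():
    """Sample one simulated 'reality' from the current randomization ranges."""
    return {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

def widen_ranges():
    """Expand every range slightly, producing progressively harder environments."""
    for name, (lo, hi) in ranges.items():
        ranges[name] = [lo - GROWTH * lo, hi + GROWTH * hi]

def run_policy_in_simulation(params):
    """Stand-in for an RL rollout: success gets rarer as the ranges spread out."""
    spread = sum(hi - lo for lo, hi in ranges.values())
    return random.random() > spread

recent = []
for episode in range(500):
    params = sample_environment_params()
    recent = (recent + [run_policy_in_simulation(params)])[-50:]
    if sum(recent) / len(recent) > SUCCESS_THRESHOLD:
        widen_ranges()

print("final randomization ranges:", ranges)
```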
API
In June 2020, OpenAI announced a multi-purpose API which it said was "for accessing new AI models developed by OpenAI" to let developers call on it for "any English language AI task".[105][106]
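A minimal sketch of calling the API with the `openai` Python package's legacy completion interface, roughly as it was documented around the API's launch; the model name, key handling, and exact call signature are assumptions and have changed in later library versions.

```python
import openai  # assumes the legacy (pre-1.0) openai Python package

openai.api_key = "YOUR_API_KEY"  # placeholder; keys are issued per account

# Legacy-style text completion request against an early GPT-3 model.
response = openai.Completion.create(
    engine="davinci",                                   # an early GPT-3 model name
    prompt="Translate to French: Hello, how are you?",
    max_tokens=32,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```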
OpenAI's original GPT model ("GPT-1")
The original paper on generative pre-training of a transformer-based language model was written by Alec Radford and his colleagues, and published in preprint on OpenAI's website on June 11, 2018.[108] It showed how a generative model of language is able to acquire world knowledge and process long-range dependencies by pre-training on a diverse corpus with long stretches of contiguous text.
GPT-2
Generative Pre-trained Transformer 2 ("GPT-2") is an unsupervised transformer language model and the successor to OpenAI's original GPT model ("GPT-1"). GPT-2 was first announced in February 2019, with only limited demonstrative versions initially released to the public. The full version of GPT-2 was not immediately released out of concern over potential misuse, including applications for writing fake news.[109] Some experts expressed skepticism that GPT-2 posed a significant threat.
The Allen Institute for Artificial Intelligence responded to GPT-2 with a tool to detect "neural fake news".[110] Other researchers, such as Jeremy Howard, warned of "the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter".[111] In November 2019, OpenAI released the complete version of the GPT-2 language model.[112] Several websites host interactive demonstrations of different instances of GPT-2 and other transformer models.[113][114][115]
GPT-2's authors argue that unsupervised language models are general-purpose learners, as illustrated by GPT-2 achieving state-of-the-art accuracy and perplexity on 7 of 8 zero-shot tasks (i.e., the model was not further trained on any task-specific input-output examples).
The corpus it was trained on, called WebText, contains slightly over 8 million documents totaling 40 gigabytes of text from URLs shared in Reddit submissions with at least 3 upvotes. Rather than encoding the vocabulary with word-level tokens, GPT-2 uses byte pair encoding, which avoids certain vocabulary issues by representing any string of characters through a mix of individual characters and multiple-character tokens.[116]
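The following toy example illustrates the general idea of byte pair encoding: the most frequent adjacent pair of symbols is repeatedly merged into a new token, so common character sequences become single tokens while any string stays representable. It is a simplified sketch of the technique, not GPT-2's actual byte-level tokenizer or merge table.

```python
from collections import Counter

def bpe_train(text, num_merges):
    """Greedily merge the most frequent adjacent symbol pair, num_merges times."""
    seq = list(text)           # start from individual characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(seq, seq[1:]))      # count adjacent symbol pairs
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append((a, b))
        merged, i = [], 0
        while i < len(seq):                     # rewrite the sequence with the merged token
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                merged.append(a + b)
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    return seq, merges

tokens, merges = bpe_train("low lower lowest", num_merges=4)
print(tokens)   # characters progressively merged into multi-character tokens
print(merges)   # the learned merge rules, applied in order
```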
GPT-3
First described in May 2020, Generative Pre-trained Transformer 3 (GPT-3) is an unsupervised transformer language model and the successor to GPT-2.[118][119][120] OpenAI stated that the full version of GPT-3 contains 175 billion parameters,[120] two orders of magnitude more than the 1.5 billion parameters[121] in the full version of GPT-2 (although GPT-3 models with as few as 125 million parameters were also trained).[122]
OpenAI stated that GPT-3 succeeds at certain "meta-learning" tasks. It can generalize the purpose of a single input-output pair. The paper gives an example of translation and cross-linguistic transfer learning between English and Romanian, and between English and German.[120]
GPT-3 dramatically improved benchmark results over GPT-2. OpenAI cautioned that such scaling up of language models could be approaching or encountering the fundamental capability limitations of predictive language models.[123] Pre-training GPT-3 required several thousand petaflop/s-days of compute, compared to tens of petaflop/s-days for the full GPT-2 model.[120] Like that of its predecessor,[109] GPT-3's fully trained model was not immediately released to the public on the grounds of possible abuse, though OpenAI planned to allow access through a paid cloud API after a two-month free private beta that began in June 2020.[105][125]
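For scale, a petaflop/s-day is defined in the cited compute note as sustaining roughly $10^{15}$ neural-network operations per second for one day, so a budget of a few thousand petaflop/s-days corresponds, as a rough back-of-the-envelope conversion rather than an official figure, to:

$$
1\ \text{petaflop/s-day} = 10^{15}\ \tfrac{\text{ops}}{\text{s}} \times 86{,}400\ \text{s} \approx 8.64 \times 10^{19} \approx 10^{20}\ \text{operations},
$$
$$
\text{a few thousand petaflop/s-days} \approx 3 \times 10^{3} \times 10^{20} = 3 \times 10^{23}\ \text{operations}.
$$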
On September 23, 2020, GPT-3 was licensed exclusively to Microsoft.[126][127]
Codex
Announced in mid-2021, Codex is a descendant of GPT-3 that has additionally been trained on code from 54 million GitHub repositories,[128][129] and is the AI powering the code autocompletion tool GitHub Copilot.[129] In August 2021, an API was released in private beta.[130] According to OpenAI, the model is able to create working code in over a dozen programming languages, most effectively in Python.[128]
Several issues with glitches, design flaws, and security vulnerabilities have been brought up.[131][132]
GitHub Copilot has been accused of emitting copyrighted code, with no author attribution or license.[133]
OpenAI announced that it would discontinue support for the Codex API on March 23, 2023.[134]
Whisper
Released in 2022, Whisper is a general-purpose speech recognition model.[135] It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification.[136]
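A short sketch of transcribing an audio file with the open-sourced `whisper` Python package, following its commonly documented usage; the checkpoint size and file path are placeholders.

```python
import whisper  # assumes OpenAI's open-sourced package (pip install openai-whisper)

model = whisper.load_model("base")        # smaller checkpoints trade accuracy for speed
result = model.transcribe("speech.mp3")   # placeholder path to any local audio file

print(result["text"])                     # the recognized transcript
# The same model can also translate speech into English:
# model.transcribe("speech.mp3", task="translate")
```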
GPT-4
On March 14, 2023, OpenAI announced the release of Generative Pre-trained Transformer 4 (GPT-4), capable of accepting text or image inputs.[137] OpenAI announced the updated technology passed a simulated law school bar exam with a score around the top 10% of test takers; by contrast, the prior version, GPT-3.5, scored around the bottom 10%. GPT-4 can also read, analyze or generate up to 25,000 words of text, and write code in all major programming languages.[138]
MuseNet and Jukebox (music)
Released in 2019, MuseNet is a deep neural net trained to predict subsequent musical notes in MIDI music files. It can generate songs with ten different instruments in fifteen different styles. According to The Verge, a song generated by MuseNet tends to start reasonably but then fall into chaos the longer it plays.[139][140] In pop culture, initial applications of this tool were utilized as early as 2020 for the internet psychological thriller Ben Drowned to create music for the titular character.[141][142]
Released in 2020, Jukebox is an open-sourced algorithm to generate music with vocals. After training on 1.2 million samples, the system accepts a genre, artist, and a snippet of lyrics and outputs song samples. OpenAI stated the songs "show local musical coherence [and] follow traditional chord patterns" but acknowledged that the songs lack "familiar larger musical structures such as choruses that repeat" and that "there is a significant gap" between Jukebox and human-generated music. The Verge stated "It's technologically impressive, even if the results sound like mushy versions of songs that might feel familiar", while Business Insider stated "surprisingly, some of the resulting songs are catchy and sound legitimate".[143][144][145]
Microscope
Released in 2020, Microscope[146] is a collection of visualizations of every significant layer and neuron of eight neural network models that are often studied for interpretability.[147] Microscope was created to make it easier to analyze the features that form inside these neural networks. The models included are AlexNet, VGG-19, different versions of Inception, and different versions of CLIP ResNet.[148]
DALL-E and CLIP (images)
Revealed in 2021, DALL-E is a Transformer model that creates images from textual descriptions.[149]
Also revealed in 2021, CLIP does the reverse: given an image, it identifies the textual description that best matches it.[150] DALL-E uses a 12-billion-parameter version of GPT-3 to interpret natural language inputs (such as "a green leather purse shaped like a pentagon" or "an isometric view of a sad capybara") and generate corresponding images. It can create images of realistic objects ("a stained-glass window with an image of a blue strawberry") as well as objects that do not exist in reality ("a cube with the texture of a porcupine"). As of March 2021, no API or code for DALL-E was available.
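CLIP's reference implementation and pretrained weights were released separately by OpenAI; the sketch below, modeled on the usage shown in the openai/CLIP repository, scores how well candidate captions match an image. The image path and captions are placeholders, and the calls shown are assumptions based on that repository.

```python
import torch
from PIL import Image
import clip  # assumes the openai/CLIP package and its dependencies

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)   # pretrained CLIP checkpoint

image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)  # placeholder image
captions = ["a photo of a dog", "a photo of a cat", "a diagram"]
text = clip.tokenize(captions).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)   # similarity of the image to each caption
    probs = logits_per_image.softmax(dim=-1)

print(dict(zip(captions, probs[0].tolist())))  # highest value = best-matching caption
```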
DALL-E 2
In April 2022, OpenAI announced DALL-E 2, an updated version of the model with more realistic results.[151] In December 2022, OpenAI published on GitHub software for Point-E, a new rudimentary system for converting a text description into a 3-dimensional model.[152]
ChatGPT
Launched in November 2022, ChatGPT is an artificial intelligence tool built on top of GPT-3.5 that provides a conversational interface allowing users to ask questions in natural language. The system then responds with an answer within seconds. ChatGPT reached 1 million users 5 days after its launch.[153]
ChatGPT Plus is a $20/month subscription service that gives users access to ChatGPT during peak hours, faster response times, a choice between the GPT-3.5 and GPT-4 models, and early access to new features.[154]
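Programmatic access to the same family of chat models is offered through OpenAI's API; the sketch below uses the legacy (pre-1.0) `openai` package's chat interface, with the model name and call signature as assumptions that have changed in later library versions.

```python
import openai  # assumes the legacy (pre-1.0) openai Python package

openai.api_key = "YOUR_API_KEY"  # placeholder

# A single-turn conversational exchange; multi-turn chats work by appending
# earlier user/assistant messages to the list.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # the chat model family underlying ChatGPT
    messages=[
        {"role": "user", "content": "Explain reinforcement learning in one sentence."}
    ],
)

print(response["choices"][0]["message"]["content"])
```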
In May 2023, OpenAI launched a ChatGPT app for iOS on the App Store, followed in July 2023 by an Android app on the Play Store.[155] The app supports chat history syncing and voice input (using Whisper, OpenAI's speech recognition model).[156][155][157]
Controversies
OpenAI has been criticized for outsourcing the annotation of data sets including toxic content to Sama, a company based in San Francisco that employed workers in Kenya. These annotations were used to train an AI model to detect toxicity, which could then be used to filter out toxic content, notably from ChatGPT's training data and outputs. However, these pieces of text usually contained detailed descriptions of various types of violence, including sexual violence. The four Sama employees interviewed by Time described themselves as mentally scarred. OpenAI paid Sama $12.50 per hour of work, of which Sama passed on the equivalent of between $1.32 and $2.00 per hour post-tax to its annotators. Sama's spokesperson said that the $12.50 also covered other implicit costs, among them infrastructure expenses, quality assurance and management.[158]
The company has also been criticized for disclosing particularly few technical details about products like GPT-4, which contradicts its initial commitment to openness and makes it harder for independent researchers to replicate its work and develop safeguards. OpenAI cited competitiveness and safety reasons to justify this strategic turn. OpenAI's chief scientist Ilya Sutskever argued in 2023 that open-sourcing increasingly capable models is increasingly risky, and predicted that the safety reasons for not open-sourcing the most potent AI models would become "obvious" within a few years.[159]
OpenAI has been sued for copyright infringement by authors Sarah Silverman, Matthew Butterick, Paul Tremblay and Mona Awad.[160][161] The New York Times has also envisaged a lawsuit.[161] In September 2023, 17 authors, including George R. R. Martin, John Grisham, Jodi Picoult and Jonathan Franzen, joined the Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work.[162][163]
OpenAI has been sued for violating the EU's General Data Protection Regulation (GDPR).[164][165] In April 2023, the European Data Protection Board (EDPB) set up a dedicated task force on ChatGPT to coordinate regulatory oversight.[164]
References
- "I Tried To Visit OpenAI's Office. Hilarity Ensued". The San Francisco Standard. December 20, 2022. Retrieved June 3, 2023.
- Woo, Erin; Efrati, Amir (May 4, 2023). "OpenAI's Losses Doubled to $540 Million as It Developed ChatGPT". The Information.
In 2022, by comparison, revenue was just $28 million, mainly from selling access to its AI software... OpenAI's losses roughly doubled to around $540 million last year as it developed ChatGPT...
- "Mira Murati, the young CTO of OpenAI, is building ChatGPT and shaping your future | Fortune". October 5, 2023.
- "OPENAI, INC". OpenCorporates. December 8, 2015. Retrieved August 2, 2023.
- "Our Structure". OpenAI. June 23, 2023. Retrieved October 15, 2023.
- "OpenAI Charter". openai.com. April 9, 2018. Retrieved July 11, 2023.
- "Sam Altman on His Plan to Keep A.I. Out of the Hands of the "Bad Guys"". Vanity Fair. 2015. Archived from the original on February 3, 2023. Retrieved January 23, 2023.
- "Introducing OpenAI". OpenAI. December 12, 2015. Archived from the original on August 8, 2017. Retrieved January 27, 2023.
- "OpenAI, the company behind ChatGPT: What all it does, how it started and more". The Times of India. January 25, 2023. Archived from the original on February 3, 2023. Retrieved January 28, 2023.
- Browne, Ryan (January 10, 2023). "Microsoft reportedly plans to invest $10 billion in creator of buzzy A.I. tool ChatGPT". CNBC. Archived from the original on February 3, 2023. Retrieved January 27, 2023.
- Lardinois, Frederic (March 14, 2023). "Microsoft's new Bing was using GPT-4 all along". TechCrunch. Retrieved March 30, 2023.
- "Introducing OpenAI". OpenAI. December 12, 2015. Archived from the original on August 8, 2017. Retrieved December 23, 2022.
- "Introducing OpenAI". OpenAI Blog. December 12, 2015. Archived from the original on February 24, 2019. Retrieved September 29, 2018.
- "Tech giants pledge $1bn for 'altruistic AI' venture, OpenAI". BBC News. December 12, 2015. Archived from the original on March 14, 2018. Retrieved December 19, 2015.
- Conger, Kate. "Elon Musk's Neuralink Sought to Open an Animal Testing Facility in San Francisco". Gizmodo. Archived from the original on September 24, 2018. Retrieved October 11, 2018.
- Hao, Karen (February 17, 2020). "The messy, secretive reality behind OpenAI's bid to save the world". MIT Technology Review. Archived from the original on April 3, 2020. Retrieved March 9, 2020.
- Cade Metz (April 27, 2016). "Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free". Wired. Archived from the original on April 27, 2016. Retrieved April 28, 2016.
- Dave Gershgorn (April 27, 2016). "Elon Musk's Artificial Intelligence Group Opens A 'Gym' To Train A.I." Popular Science. Archived from the original on April 30, 2016. Retrieved April 29, 2016.
- Carr, Austin; King, Ian (June 15, 2023). "How Nvidia Became ChatGPT's Brain and Joined the $1 Trillion Club". Bloomberg News. Archived from the original on June 18, 2023.
- Vanian, Jonathan (August 15, 2016). "Elon Musk's Artificial Intelligence Project Just Got a Free Supercomputer". Fortune. Archived from the original on June 7, 2023.
- Metz, Cade. "Elon Musk's Lab Wants to Teach Computers to Use Apps Just Like Humans Do". WIRED. Archived from the original on January 13, 2019. Retrieved December 31, 2016.
- Mannes, John. "OpenAI's Universe is the fun parent every artificial intelligence deserves". TechCrunch. Archived from the original on February 19, 2019. Retrieved December 31, 2016.
- "OpenAI – Universe". Archived from the original on January 1, 2017. Retrieved December 31, 2016.
- Claburn, Thomas. "Elon Musk-backed OpenAI reveals Universe – a universal training ground for computers". The Register. Archived from the original on January 1, 2017. Retrieved December 31, 2016.
- "Microsoft to invest $1 billion in OpenAI". Reuters. July 22, 2019. Archived from the original on May 25, 2020. Retrieved March 6, 2020.
- Vincent, James (February 21, 2018). "Elon Musk leaves board of AI safety group to avoid conflict of interest with Tesla". The Verge. Archived from the original on November 9, 2020. Retrieved February 22, 2018.
- "The secret history of Elon Musk, Sam Altman, and OpenAI | Semafor". www.semafor.com. March 24, 2023. Retrieved March 30, 2023.
- Hern, Alex (February 14, 2019). "New AI fake text generator may be too dangerous to release, say creators". The Guardian. Archived from the original on February 14, 2019. Retrieved December 19, 2020.
- "OpenAI shifts from nonprofit to 'capped-profit' to attract capital". March 11, 2019. Archived from the original on January 4, 2023. Retrieved January 4, 2023.
- "To Compete With Google, OpenAI Seeks Investors–and Profits". Wired. December 3, 2019. Archived from the original on March 14, 2020. Retrieved March 6, 2020.
- Kahn, Jeremy (March 11, 2019). "AI Research Group Co-Founded by Elon Musk Starts For-Profit Arm". Bloomberg News. Archived from the original on December 7, 2019. Retrieved March 6, 2020.
- Metz, Cade (April 19, 2018). "A.I. Researchers Are Making More Than $1 Million, Even at a Nonprofit". The New York Times. ISSN 0362-4331. Archived from the original on August 8, 2018. Retrieved January 28, 2023.
- "Microsoft invests in and partners with OpenAI". July 22, 2019.
- Langston, Jennifer (January 11, 2023). "Microsoft announces new supercomputer, lays out vision for future AI work". Source. Archived from the original on February 10, 2023. Retrieved February 10, 2023.
Built in collaboration with and exclusively for OpenAI
- Foley, Mary Jo (May 19, 2020). "Microsoft builds a supercomputer for OpenAI for training massive AI models". ZDNET. Archived from the original on February 10, 2023. Retrieved February 10, 2023.
- "Microsoft's OpenAI supercomputer has 285,000 CPU cores, 10,000 GPUs". Engadget. May 19, 2020. Archived from the original on February 10, 2023. Retrieved February 10, 2023.
Microsoft's OpenAI supercomputer has 285,000 CPU cores, 10,000 GPUs. It's one of the five fastest systems in the world.
- "Microsoft Invests in and Partners with OpenAI to Support Us Building Beneficial AGI". OpenAI. July 22, 2019. Archived from the original on November 7, 2020. Retrieved February 21, 2020.
- Murgia, Madhumita (August 7, 2019). "DeepMind runs up higher losses and debts in race for AI". Financial Times. Archived from the original on December 26, 2019. Retrieved March 6, 2020.
- "OpenAI Will Need More Capital Than Any Non-Profit Has Ever Raised". Fortune. Archived from the original on December 8, 2019. Retrieved March 6, 2020.
- Vincent, James (July 22, 2019). "Microsoft invests $1 billion in OpenAI to pursue holy grail of artificial intelligence". The Verge. Archived from the original on July 23, 2019. Retrieved March 6, 2020.
- Vance, Ashlee (June 11, 2020). "Trillions of Words Analyzed, OpenAI Sets Loose AI Language Colossus". Bloomberg News. Archived from the original on October 13, 2020. Retrieved June 12, 2020.
- "OpenAI debuts DALL-E for generating images from text". VentureBeat. January 5, 2021. Archived from the original on January 5, 2021. Retrieved January 5, 2021.
- Roose, Kevin (December 5, 2022). "The Brilliance and Weirdness of ChatGPT". The New York Times. Archived from the original on January 18, 2023. Retrieved January 5, 2023.
- Dastin, Jeffrey; Hu, Krystal; Dave, Paresh; Dave, Paresh (December 15, 2022). "Exclusive: ChatGPT owner OpenAI projects $1 billion in revenue by 2024". Reuters. Archived from the original on February 3, 2023. Retrieved January 5, 2023.
- Kruppa, Berber Jin and Miles (January 5, 2023). "WSJ News Exclusive | ChatGPT Creator in Investor Talks at $29 Billion Valuation". Wall Street Journal. Archived from the original on February 3, 2023. Retrieved January 6, 2023.
- "Microsoft Adds $10 Billion to Investment in ChatGPT Maker OpenAI". Bloomberg.com. January 23, 2023. Archived from the original on January 23, 2023. Retrieved January 23, 2023.
- Capoot, Ashley (January 23, 2023). "Microsoft announces multibillion-dollar investment in ChatGPT-maker OpenAI". CNBC. Archived from the original on January 23, 2023. Retrieved January 23, 2023.
- Warren, Tom (January 23, 2023). "Microsoft extends OpenAI partnership in a "multibillion dollar investment"". The Verge. Retrieved April 29, 2023.
- "Bard: Google launches ChatGPT rival". BBC News. February 6, 2023. Archived from the original on February 7, 2023. Retrieved February 7, 2023.
- Vincent, James (February 8, 2023). "Google's AI chatbot Bard makes factual error in first demo". The Verge. Archived from the original on February 12, 2023. Retrieved February 12, 2023.
- Dotan, Tom (February 7, 2023). "Microsoft Adds ChatGPT AI Technology to Bing Search Engine". Wall Street Journal. Archived from the original on February 7, 2023. Retrieved February 7, 2023.
- Dastin, Jeffrey (March 3, 2023). "OpenAI's long-time backer Reid Hoffman leaves board". Reuters. Retrieved March 17, 2023.
- "GPT-4". openai.com. Retrieved March 16, 2023.
- "Governance of superintelligence". openai.com. Retrieved May 30, 2023.
- Wodecki, Ben; Yao, Deborah (May 23, 2023). "OpenAI Founders Warn AI 'Superintelligence' is Like Nuclear Power".
- "OpenAI acquires start-up Global Illumination to work on core products, ChatGPT". Reuters. August 16, 2023. Retrieved August 17, 2023.
- Bass, Dina (July 22, 2019). "Microsoft to invest $1 billion in OpenAI". Los Angeles Times. Archived from the original on July 22, 2019. Retrieved July 22, 2019.
- Bordoloi, Pritam (May 9, 2022). "OpenAI gets a new president, CTO & COO in the latest rejig". AIM. Archived from the original on October 16, 2022. Retrieved October 11, 2022.
- "Silicon Valley investors to bankroll artificial-intelligence center". The Seattle Times. December 13, 2015. Archived from the original on January 5, 2016. Retrieved December 19, 2015.
- "Our structure". openai.com. Retrieved July 14, 2023.
- Liedtke, Michael. "Elon Musk, Peter Thiel, Reid Hoffman, others back $1 billion OpenAI research center". Mercury News. Archived from the original on December 22, 2015. Retrieved December 19, 2015.
- Vincent, James (July 22, 2019). "Microsoft invests $1 billion in OpenAI to pursue holy grail of artificial intelligence". The Verge. Archived from the original on July 23, 2019. Retrieved July 23, 2019.
- "About OpenAI". OpenAI. December 11, 2015. Archived from the original on December 22, 2017. Retrieved December 23, 2022.
- "Elon Musk, Infosys, others back OpenAI with $1 bn". Business Standard India. Indo-Asian News Service. December 12, 2015. Archived from the original on August 30, 2019. Retrieved August 30, 2019.
- Piper, Kelsey (November 2, 2018). "Why Elon Musk fears artificial intelligence". Vox. Archived from the original on April 23, 2021. Retrieved March 10, 2021.
- Lewontin, Max (December 14, 2015). "Open AI: Effort to democratize artificial intelligence research?". The Christian Science Monitor. Archived from the original on December 19, 2015. Retrieved December 19, 2015.
- Cade Metz (April 27, 2016). "Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free". Wired. Archived from the original on April 27, 2016. Retrieved April 28, 2016.
- Mendoza, Jessica. "Tech leaders launch nonprofit to save the world from killer robots". The Christian Science Monitor. Archived from the original on July 3, 2018. Retrieved December 19, 2015.
- Metz, Cade (December 15, 2015). "Elon Musk's Billion-Dollar AI Plan Is About Far More Than Saving the World". Wired. Archived from the original on December 19, 2015. Retrieved December 19, 2015.
Altman said they expect this decades-long project to surpass human intelligence.
- Vishal Sikka (December 14, 2015). "OpenAI: AI for All". InfyTalk. Infosys. Archived from the original on December 22, 2015. Retrieved December 22, 2015.
- "Sam Altman's Manifest Destiny". The New Yorker. No. October 10, 2016. Archived from the original on October 4, 2016. Retrieved October 4, 2016.
- Vincent, James (February 21, 2019). "AI researchers debate the ethics of sharing potentially harmful programs". The Verge. Archived from the original on February 9, 2021. Retrieved March 6, 2020.
- "Our approach to alignment research". openai.com. Retrieved April 26, 2023.
- Abdur Rehman, Syed (August 16, 2023). "OpenAI proposes a new way to use GPT-4 for content moderation". techfusionist. Retrieved September 18, 2023.
- Wiggers, Kyle (July 16, 2021). "OpenAI disbands its robotics research team". VentureBeat. Archived from the original on February 12, 2023. Retrieved February 12, 2023.
- Lee, Dave (October 15, 2019). "Robot solves Rubik's cube, but not grand challenge". BBC News. Archived from the original on April 3, 2020. Retrieved February 29, 2020.
- Greg Brockman; John Schulman (April 27, 2016). "OpenAI Gym Beta". OpenAI Blog. OpenAI. Archived from the original on February 26, 2019. Retrieved April 29, 2016.
- "OpenAI Gym". GitHub. Archived from the original on January 28, 2019. Retrieved May 8, 2017.
- Brockman, Greg (September 12, 2017). "Yep, the Github repo has been the focus of the project for the past year. The Gym site looks cool but hasn't been maintained". @gdb. Archived from the original on September 13, 2017. Retrieved November 7, 2017.
- "AI Sumo Wrestlers Could Make Future Robots More Nimble". Wired. October 11, 2017. Archived from the original on November 7, 2017. Retrieved November 2, 2017.
- "OpenAI's Goofy Sumo-Wrestling Bots Are Smarter Than They Look". MIT Technology Review. Archived from the original on November 9, 2018. Retrieved November 2, 2017.
- Savov, Vlad (August 14, 2017). "My favorite game has been invaded by killer AI bots and Elon Musk hype". The Verge. Archived from the original on June 26, 2018. Retrieved June 25, 2018.
- Frank, Blair Hanley. "OpenAI's bot beats top Dota 2 player so badly that he quits". Venture Beat. Archived from the original on August 12, 2017. Retrieved August 12, 2017.
- "Dota 2". blog.openai.com. August 11, 2017. Archived from the original on August 11, 2017. Retrieved August 12, 2017.
- "More on Dota 2". blog.openai.com. August 16, 2017. Archived from the original on February 23, 2019. Retrieved August 16, 2017.
- Simonite, Tom. "Can Bots Outwit Humans in One of the Biggest Esports Games?". Wired. Archived from the original on June 25, 2018. Retrieved June 25, 2018.
- Kahn, Jeremy (June 25, 2018). "A Bot Backed by Elon Musk Has Made an AI Breakthrough in Video Game World". Bloomberg.com. Bloomberg L.P. Archived from the original on June 27, 2018. Retrieved June 27, 2018.
- Clifford, Catherine (June 28, 2018). "Bill Gates says gamer bots from Elon Musk-backed nonprofit are 'huge milestone' in A.I." CNBC. Archived from the original on June 28, 2018. Retrieved June 29, 2018.
- "OpenAI Five Benchmark". blog.openai.com. July 18, 2018. Archived from the original on February 13, 2019. Retrieved August 25, 2018.
- Vincent, James (June 25, 2018). "AI bots trained for 180 years a day to beat humans at Dota 2". The Verge. Archived from the original on June 25, 2018. Retrieved June 25, 2018.
- Savov, Vlad (August 6, 2018). "The OpenAI Dota 2 bots just defeated a team of former pros". The Verge. Archived from the original on August 7, 2018. Retrieved August 7, 2018.
- Simonite, Tom. "Pro Gamers Fend off Elon Musk-Backed AI Bots—for Now". Wired. Archived from the original on August 24, 2018. Retrieved August 25, 2018.
- Quach, Katyanna. "Game over, machines: Humans defeat OpenAI bots once again at video games Olympics". The Register. Archived from the original on August 25, 2018. Retrieved August 25, 2018.
- "The International 2018: Results". blog.openai.com. August 24, 2018. Archived from the original on August 24, 2018. Retrieved August 25, 2018.
- Statt, Nick (April 13, 2019). "OpenAI's Dota 2 AI steamrolls world champion e-sports team with back-to-back victories". The Verge. Archived from the original on April 15, 2019. Retrieved July 20, 2019.
- "How to Train Your OpenAI Five". OpenAI Blog. April 15, 2019. Archived from the original on June 30, 2019. Retrieved July 20, 2019.
- Wiggers, Kyle (April 22, 2019). "OpenAI's Dota 2 bot defeated 99.4% of players in public matches". Venture Beat. Archived from the original on July 11, 2019. Retrieved April 22, 2019.
- Fangasadha, Edbert Felix; Soeroredjo, Steffi; Anderies; Gunawan, Alexander Agung Santoso (September 17, 2022). "Literature Review of OpenAI Five's Mechanisms in Dota 2's Bot Player". 2022 International Seminar on Application for Technology of Information and Communication (ISemantic). IEEE. pp. 183–190. doi:10.1109/iSemantic55962.2022.9920480. ISBN 978-1-6654-8837-2. S2CID 253047170.
- "Gym Retro". OpenAI. May 25, 2018. Archived from the original on February 12, 2023. Retrieved February 12, 2023.
- Greene, Tristan (May 4, 2018). "OpenAI's Debate Game teaches you and your friends how to lie like robots". The Next Web. Archived from the original on May 5, 2018. Retrieved May 31, 2018.
- "Why Scientists Think AI Systems Should Debate Each Other". Fast Company. May 8, 2018. Archived from the original on May 19, 2018. Retrieved June 2, 2018.
- Vincent, James (July 30, 2018). "OpenAI sets new benchmark for robot dexterity". The Verge. Archived from the original on February 12, 2023. Retrieved February 12, 2023.
- OpenAI; Andrychowicz, Marcin; Baker, Bowen; Chociej, Maciek; Józefowicz, Rafał; McGrew, Bob; Pachocki, Jakub; Petron, Arthur; Plappert, Matthias; Powell, Glenn; Ray, Alex; Schneider, Jonas; Sidor, Szymon; Tobin, Josh; Welinder, Peter; Weng, Lilian; Zaremba, Wojciech (2019). "Learning Dexterous In-Hand Manipulation". arXiv:1808.00177v5 [cs.LG].
- OpenAI; Akkaya, Ilge; Andrychowicz, Marcin; Chociej, Maciek; Litwin, Mateusz; McGrew, Bob; Petron, Arthur; Paino, Alex; Plappert, Matthias; Powell, Glenn; Ribas, Raphael (2019). "Solving Rubik's Cube with a Robot Hand". arXiv:1910.07113v1 [cs.LG].
- "OpenAI API". OpenAI. June 11, 2020. Archived from the original on June 11, 2020. Retrieved June 14, 2020.
Why did OpenAI choose to release an API instead of open-sourcing the models?
There are three main reasons we did this. First, commercializing the technology helps us pay for our ongoing AI research, safety, and policy efforts. Second, many of the models underlying the API are very large, taking a lot of expertise to develop and deploy and making them very expensive to run. This makes it hard for anyone except larger companies to benefit from the underlying technology. We're hopeful that the API will make powerful AI systems more accessible to smaller businesses and organizations. Third, the API model allows us to more easily respond to misuse of the technology. Since it is hard to predict the downstream use cases of our models, it feels inherently safer to release them via an API and broaden access over time, rather than release an open source model where access cannot be adjusted if it turns out to have harmful applications.
- "TechCrunch Startup and Technology News". TechCrunch. June 11, 2020. Archived from the original on June 12, 2020. Retrieved June 11, 2020.
If you've ever wanted to try out OpenAI's vaunted machine learning toolset, it just got a lot easier. The company has released an API that lets developers call its AI tools in on "virtually any English language task."
- "GPT-1 to GPT-4: Each of OpenAI's GPT Models Explained and Compared". April 11, 2023.
- "Improving Language Understanding by Generative Pre-Training" (PDF). Archived (PDF) from the original on January 26, 2021. Retrieved June 9, 2020.
- Hern, Alex (February 14, 2019). "New AI fake text generator may be too dangerous to release, say creators". The Guardian. Archived from the original on February 14, 2019. Retrieved February 14, 2019.
- Schwartz, Oscar (July 4, 2019). "Could 'fake text' be the next global political threat?". The Guardian. Archived from the original on July 16, 2019. Retrieved July 16, 2019.
- Vincent, James (February 14, 2019). "OpenAI's new multitalented AI writes, translates, and slanders". The Verge. Archived from the original on December 18, 2020. Retrieved July 16, 2019.
- "GPT-2: 1.5B Release". OpenAI. November 5, 2019. Archived from the original on November 14, 2019. Retrieved November 14, 2019.
- "Write With Transformer". Archived from the original on December 4, 2019. Retrieved December 4, 2019.
- "Talk to Transformer". Archived from the original on December 4, 2019. Retrieved December 4, 2019.
- "CreativeEngines". Archived from the original on February 3, 2023. Retrieved June 25, 2021.
- Language Models are Unsupervised Multitask Learners (PDF), archived (PDF) from the original on December 12, 2019, retrieved December 4, 2019
- Ganesh, Prakhar (December 17, 2019). "Pre-trained Language Models: Simplified". Archived from the original on February 3, 2023. Retrieved September 9, 2020.
The intuition behind pre-trained language models is to create a black box which understands the language and can then be asked to do any specific task in that language.
- "openai/gpt-3". OpenAI. May 29, 2020. Archived from the original on November 14, 2020. Retrieved May 29, 2020.
- Sagar, Ram (June 3, 2020). "OpenAI Releases GPT-3, The Largest Model So Far". Analytics India Magazine. Archived from the original on August 4, 2020. Retrieved June 14, 2020.
- Brown, Tom; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini (June 1, 2020). "Language Models are Few-Shot Learners". p. appendix. arXiv:2005.14165 [cs.CL].
- Language Models are Unsupervised Multitask Learners (PDF), archived (PDF) from the original on December 12, 2019, retrieved December 4, 2019,
GPT-2, is a 1.5B parameter Transformer
- Brown, Tom; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini (June 1, 2020). "Language Models are Few-Shot Learners". arXiv:2005.14165 [cs.CL].
Since we increase the capacity by over two orders of magnitude from GPT-2 to GPT-3
- Ray, Tiernan (2020). "OpenAI's gigantic GPT-3 hints at the limits of language models for AI". ZDNet. Archived from the original on June 1, 2020. Retrieved June 5, 2020.
- Amodei, Dario; Hernandez, Danny (May 16, 2018). "AI and Compute". Archived from the original on June 17, 2020. Retrieved August 30, 2020.
A petaflop/s-day (pfs-day) consists of performing 1015 neural net operations per second for one day, or a total of about 1020 operations. The compute-time product serves as a mental convenience, similar to kW-hr for energy.
- Eadicicco, Lisa. "The artificial intelligence company that Elon Musk helped found is now selling the text-generation software it previously said was too dangerous to launch". Business Insider. Archived from the original on November 14, 2020. Retrieved July 6, 2020.
- "OpenAI is giving Microsoft exclusive access to its GPT-3 language model". MIT Technology Review. Archived from the original on February 5, 2021. Retrieved September 24, 2020.
- "Microsoft gets exclusive license for OpenAI's GPT-3 language model". VentureBeat. September 22, 2020. Archived from the original on November 8, 2020. Retrieved September 24, 2020.
- Alford, Anthony (August 31, 2021). "OpenAI Announces 12 Billion Parameter Code-Generation AI Codex". InfoQ. Archived from the original on July 9, 2022. Retrieved September 3, 2021.
- Wiggers, Kyle (July 8, 2021). "OpenAI warns AI behind GitHub's Copilot may be susceptible to bias". VentureBeat. Archived from the original on February 3, 2023. Retrieved September 3, 2021.
- Zaremba, Wojciech (August 10, 2021). "OpenAI Codex". OpenAI. Archived from the original on February 3, 2023. Retrieved September 3, 2021.
- Dickson, Ben (August 16, 2021). "What to expect from OpenAI's Codex API". VentureBeat. Archived from the original on February 3, 2023. Retrieved September 3, 2021.
- Claburn, Thomas (August 25, 2021). "GitHub's Copilot may steer you into dangerous waters about 40% of the time – study". The Register. Archived from the original on February 3, 2023. Retrieved September 3, 2021.
- "GitHub Copilot: The Latest in the List of AI Generative Models Facing Copyright Allegations". Analytics India Magazine. October 23, 2022. Retrieved March 23, 2023.
- "OpenAI Might Invite Legal Trouble". Analytics India Magazine. March 21, 2023. Retrieved March 23, 2023.
- Wiggers, Kyle (September 21, 2022). "OpenAI open-sources Whisper, a multilingual speech recognition system". TechCrunch. Archived from the original on February 12, 2023. Retrieved February 12, 2023.
- Radford, Alec; Kim, Jong Wook; Xu, Tao; Brockman, Greg; McLeavey, Christine; Sutskever, Ilya (2022). "Robust Speech Recognition via Large-Scale Weak Supervision". arXiv:2212.04356 [eess.AS].
- Vincent, James (March 14, 2023). "OpenAI announces GPT-4—the next generation of its AI language model". The Verge. Archived from the original on March 14, 2023. Retrieved March 14, 2023.
- Wiggers, Kyle (March 14, 2023). "OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art". TechCrunch. Archived from the original on March 15, 2023. Retrieved March 14, 2023.
- "OpenAI's MuseNet generates AI music at the push of a button". The Verge. April 2019. Archived from the original on June 28, 2019. Retrieved June 8, 2020.
- "MuseNet". OpenAI. April 25, 2019. Archived from the original on June 13, 2020. Retrieved June 8, 2020.
- "Arcade Attack Podcast – September (4 of 4) 2020 - Alex Hall (Ben Drowned) - Interview". Arcade Attack. September 28, 2020. Archived from the original on February 3, 2023. Retrieved January 29, 2023.
- "Archived copy". Archived from the original on February 3, 2023. Retrieved January 29, 2023.
{{cite web}}
: CS1 maint: archived copy as title (link) - "OpenAI introduces Jukebox, a new AI model that generates genre-specific music". The Verge. April 30, 2020. Archived from the original on June 8, 2020. Retrieved June 8, 2020.
- Stephen, Bijan (April 30, 2020). "OpenAI introduces Jukebox, a new AI model that generates genre-specific music". Business Insider. Archived from the original on June 8, 2020. Retrieved June 8, 2020.
- "Jukebox". OpenAI. April 30, 2020. Archived from the original on June 8, 2020. Retrieved June 8, 2020.
- "OpenAI Microscope". April 14, 2020. Archived from the original on February 3, 2023. Retrieved March 27, 2021.
- Johnson, Khari (April 14, 2020). "OpenAI launches Microscope to visualize the neurons in popular machine learning models". VentureBeat. Archived from the original on February 12, 2023. Retrieved February 12, 2023.
- "OpenAI Microscope". OpenAI Microscope. Archived from the original on February 3, 2023. Retrieved March 27, 2021.
- "DALL·E: Creating Images from Text". January 5, 2021. Archived from the original on March 27, 2021. Retrieved March 27, 2021.
- "CLIP: Connecting Text and Images". January 5, 2021. Archived from the original on March 25, 2021. Retrieved March 27, 2021.
- "DALL·E 2". OpenAI. Archived from the original on April 6, 2022. Retrieved April 6, 2022.
- "ChatGPT: A scientist explains the hidden genius and pitfalls of OpenAI's chatbot". BBC Science Focus Magazine. 2022. Archived from the original on February 3, 2023. Retrieved December 30, 2022.
- "Mira Murati via Twitter". Mira Murati. December 5, 2022. Archived from the original on December 14, 2022. Retrieved December 15, 2022.
- Wiggers, Kyle (February 1, 2023). "OpenAI launches ChatGPT Plus, starting at $20 per month". TechCrunch. Archived from the original on February 12, 2023. Retrieved February 12, 2023.
- Lawler, Richard (July 25, 2023). "ChatGPT for Android is now available". The Verge. Retrieved August 17, 2023.
- "Introducing the ChatGPT app for iOS". openai.com. Retrieved August 17, 2023.
- "ChatGPT Android app FAQ | OpenAI Help Center". help.openai.com. Retrieved August 17, 2023.
- Perrigo, Billy (January 18, 2023). "Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer". Time. Retrieved August 5, 2023.
- Vincent, James (March 15, 2023). "OpenAI co-founder on company's past approach to openly sharing research: "We were wrong"". The Verge. Retrieved August 20, 2023.
- Belanger, Ashley (July 10, 2023). "Sarah Silverman sues OpenAI, Meta for being "industrial-strength plagiarists"". Ars Technica. Retrieved September 21, 2023.
- Krithika, K. L. (August 21, 2023). "Legal Challenges Surround OpenAI: A Closer Look at the Lawsuits". Analytics India Magazine. Retrieved August 23, 2023.
- Belanger, Ashley (September 20, 2023). "Grisham, Martin join authors suing OpenAI: "There is nothing fair about this" [Updated]". Ars Technica. Retrieved September 21, 2023.
- Korn, Jennifer (September 20, 2023). "George R. R. Martin, Jodi Picoult and other famous writers join Authors Guild in class action lawsuit against OpenAI | CNN Business". CNN. Retrieved September 21, 2023.
- Lomas, Natasha (August 30, 2023). "ChatGPT-maker OpenAI accused of string of data protection breaches in GDPR complaint filed by privacy researcher". TechCrunch. Retrieved September 2, 2023.
- Woollacott, Emma (September 1, 2023). "OpenAI Hit With New Lawsuit Over ChatGPT Training Data". Forbes. Retrieved September 2, 2023.
External links
- Official website
- OpenAI on Twitter
- "What OpenAI Really Wants" by Wired
- "ChatGPT Hyped or Worth?" by TheFridayTimes