Thinking Allowed

medical / technology / education / art / flub

showing posts for 'openai'

OpenAI announces first partnership with a university

"Starting in February, Arizona State University will have full access to OpenAI's ChatGPT Enterprise." "With the OpenAI partnership, ASU plans to build a personalized AI tutor for students, not only for certain courses, but also for study topics. STEM subjects are a focus and are “the make-or-break...
Source: cnbc.com

We're getting closer to OpenAI's first device

Sam Altman and Jony Ive have tapped Apple executive Tang Tan to build their new AI device.
Source: businessinsider.com

Apple’s iPhone Design Chief Enlisted by Jony Ive, Sam Altman to Work on AI Devices

Legendary designer Jony Ive and OpenAI’s Sam Altman are enlisting an Apple Inc. veteran to work on a new artificial intelligence hardware project, aiming to create devices with the latest capabilities.
Source: bloomberg.com

Opinion: OpenAI's drama marks a new and scary era in artificial intelligence

Daron Acemoglu and Simon Johnson, professors at MIT, weigh in on the recent drama at OpenAI. "Sam Altman’s dismissal and rapid reinstatement as CEO of OpenAI, the creator of ChatGPT, confirms that the future of AI is firmly in the hands of people focused on speed and profits, at the expense...
Source: latimes.com

Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer

A TIME investigation reveals the difficult conditions faced by the workers who made ChatGPT possible
Source: time.com

OpenAI's Codex Translates Everyday Language Into Computer Code

The company believes its Codex machine learning algorithm is the next step in programming—a sidekick for coders to speed up the work and ease the drudgery.
Source: singularityhub.com

Using GPT-2 to generate Tweets

Last summer I blogged about using a Deep Neural Network to generate tweets, but only used 3,200 of my tweets. Since then I've used Twitter's archive mechanism to retrieve ALL my tweets (just over 30,000) to train a network. Not any old network: the GPT-2 model from OpenAI. This 'finetuning' of an existing...
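The workflow described in the post starts by flattening a Twitter archive export into plain training text before finetuning GPT-2. A minimal sketch of that preparation step, assuming each record carries a `full_text` field as in Twitter's archive JSON; the function name and the link-stripping rule are illustrative choices, not the post's actual code:

```python
import re

def archive_to_corpus(tweets, min_len=5):
    """Turn tweet records (dicts with a 'full_text' key, as in a Twitter
    archive export) into newline-delimited text for finetuning."""
    lines = []
    for t in tweets:
        text = t.get("full_text", "")
        # Drop bare links; they add little signal for a text model.
        text = re.sub(r"https?://\S+", "", text).strip()
        if len(text) >= min_len:  # skip near-empty tweets
            lines.append(text)
    return "\n".join(lines)

# Tiny illustration with hand-made records:
sample = [
    {"full_text": "Training GPT-2 on my own tweets https://t.co/x"},
    {"full_text": "ok"},  # too short, filtered out
]
print(archive_to_corpus(sample))
```

The resulting text file would then be handed to whatever finetuning tool the author used; gpt-2-simple and Hugging Face's causal-LM training scripts are common choices for this kind of job.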

Multimodal Neurons in Artificial Neural Networks

We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually.
Source: openai.com

Phrase of the day: Scaling Kubernetes to 7,500 Nodes

We've scaled Kubernetes clusters to 7,500 nodes, producing a scalable infrastructure for large models like GPT-3, CLIP, and DALL·E, but also for rapid small-scale iterative research such as Scaling Laws for Neural Language Models. Scaling a single Kubernetes cluster...
Source: openai.com

Introducing OpenAI's DALL·E

Ever wondered what an armchair in the shape of an avocado might look like? Introducing OpenAI's DALL·E. Does this help with accessibility by explaining things in pictures from written words? Does it risk replacing humans in the creative industry with machines? "DALL·E: Creating Images from...
Source: openai.com

Learning to Summarize with Human Feedback

We've applied reinforcement learning from human feedback to train language models that are better at summarization. Our models generate summaries that are better than summaries from 10x larger models trained only with supervised learning. Even though we train...
Source: openai.com
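At the heart of this approach is a reward model fit to human comparisons between pairs of summaries. A minimal sketch of the pairwise loss commonly used for that step, with an illustrative function of my own, not OpenAI's code:

```python
import math

def reward_model_loss(r_preferred, r_rejected):
    """Pairwise cross-entropy loss for a reward model: push the score of
    the human-preferred summary above the rejected one, via
    -log(sigmoid(r_preferred - r_rejected))."""
    diff = r_preferred - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# The loss shrinks as the preferred summary's score pulls ahead:
print(reward_model_loss(2.0, 0.0))  # small loss: model agrees with the human
print(reward_model_loss(0.0, 2.0))  # large loss: model ranks the pair backwards
```

Once a reward model is trained this way, its scores drive a reinforcement-learning step (PPO in the paper) that finetunes the summarization policy.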

Solving Rubik’s Cube with a Robot Hand

We've trained a pair of neural networks to solve the Rubik’s Cube with a human-like robot hand. Instead of hand-crafting complex algorithms for the task, the researchers focus on creating complex simulated worlds in which the machine can learn. This of course...
Source: openai.com

Improving Language Understanding with Unsupervised Learning

We've obtained state-of-the-art results on a suite of diverse language tasks with a scalable, task-agnostic system, which we're also releasing. Our approach is a combination of two existing ideas: transformers and unsupervised pre-training....
Source: openai.com

Learning to Communicate

In this post we'll outline new OpenAI research in which agents develop their own language.
Source: openai.com