The new OpenAI o1 is designed to think things through

More: Meta rolls out the third update of the year for AI labels. ChatGPT gives instructions on robbery and making bombs.


Get a rundown on o1-preview and o1-mini, the latest OpenAI o1 models designed to analyze, reason through, and brainstorm complex tasks. From coding and advanced math reasoning to comparative analysis, the o1 series can be integrated into many use cases, as it can synthesize results from multiple subtasks and fact-check its own results! If you are wondering about the drawbacks, we’ve covered those as well.

Since May of this year, Meta has rolled out three updates to its labeling system for AI-modified posts; the latest incorporates user feedback from earlier iterations. And the famous (or infamous) AI chatbot ChatGPT will give you sensitive information if you ask the right questions!

Introducing OpenAI o1 for advanced reasoning and coding

The parent organization behind ChatGPT released OpenAI o1, a family of two AI models, on September 12, 2024. According to their official website, the o1 series models are “designed to spend more time thinking before they respond”. They’ve positioned the series for “tackling complex problems in science, coding, math, and similar fields”.

OpenAI o1 models

OpenAI o1 is able to approach tasks holistically, work through repetitive subtasks over long durations to arrive at answers, and analyze the results of multiple subtasks. o1-preview is suited to advanced, contextual reasoning and problem-solving, while o1-mini is a smaller, more cost-effective version of o1-preview geared toward coding tasks.

Both models are available in the ChatGPT client for individuals subscribed to ChatGPT Plus and for businesses with ChatGPT Team accounts. ChatGPT Enterprise and Educational accounts will have access to the new models this week.

What does the OpenAI o1 release mean for free plan users?

OpenAI plans to make o1-mini accessible to all free users of ChatGPT, but a release date has yet to be announced.

Limitations of OpenAI o1 

Users may want to rein in their excitement before subscribing to the new o1 series models: they have operational and usage limitations, and they are expensive.

Pricing

The o1 series models are costly, and ChatGPT imposes a weekly usage cap: 30 messages for o1-preview and 50 messages for o1-mini.

Rate limits

Developers who qualify for API usage tier 5 can integrate the o1 series models into their applications and services via the API. The rate limits will eventually be raised, but they currently stand at 20 requests per minute (RPM).
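At 20 RPM, client code that batches requests should throttle itself rather than rely on retries after 429 errors. Below is a minimal sketch of a sliding-window limiter; the `RateLimiter` class is hypothetical (not part of the OpenAI SDK), and the clock/sleep parameters exist only so the logic can be exercised without real waiting:

```python
import time
from collections import deque

class RateLimiter:
    """Cap calls at `max_calls` per `period` seconds using a sliding window."""

    def __init__(self, max_calls=20, period=60.0,
                 clock=time.monotonic, sleep=time.sleep):
        self.max_calls = max_calls
        self.period = period
        self.clock = clock    # injectable for testing
        self.sleep = sleep
        self.calls = deque()  # timestamps of recent calls

    def wait(self):
        """Block until a new call is allowed, then record it."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            # Sleep until the oldest call leaves the window.
            self.sleep(self.period - (now - self.calls[0]))
            now = self.clock()
            while self.calls and now - self.calls[0] >= self.period:
                self.calls.popleft()
        self.calls.append(now)
```

In practice you would call `limiter.wait()` immediately before each o1 API request, with the defaults matching the reported 20 RPM ceiling.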

Operational limits

  • APIs for the o1 series models do not include features such as function calling, streaming, or support for system messages. 

  • Depending on the complexity of the query, o1 models may have longer response times. OpenAI o1 may take up to 10 seconds to answer complex questions. It displays a label to show the progress of an ongoing subtask. 

  • OpenAI plans to enhance the user experience by automating model selection for input, optimizing performance and response quality. For now, however, users need to determine which model is best suited for the prompt.
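Because the o1 API launched without system-message support, one common workaround is to fold would-be system instructions into the user turn. The sketch below illustrates this; the `build_o1_messages` helper is hypothetical (not part of the OpenAI SDK), and the commented usage assumes an API key plus tier-5 access:

```python
def build_o1_messages(instructions: str, user_prompt: str) -> list[dict]:
    """Prepend would-be system instructions to the user message,
    since o1 API requests reject the "system" role."""
    return [{"role": "user", "content": f"{instructions}\n\n{user_prompt}"}]

# Usage with the official SDK would look roughly like this (not executed here):
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="o1-preview",
#       messages=build_o1_messages("Answer tersely.", "Factor x^2 - 5x + 6."),
#   )
```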

Meta is making it harder for us to recognize AI-generated/modified media

The ‘AI info’ label, which once appeared directly under the user’s name on a post, is being moved to the post menu starting this week. 

What is the ‘AI info’ label?

Meta labels Facebook, Instagram, and Threads posts that it identifies as AI-generated or AI-edited. The assessment relies on ‘industry-shared signals’ embedded by AI tools and companies, or on self-disclosure by users. 

Timeline of the ‘AI info’ label

Meta began labeling AI content in May 2024 based on feedback from the Oversight Board (described on Meta’s website: “This independent group of experts reviews important decisions we make and helps us balance free speech and safety.”). Back then, the label was called ‘Made with AI’.

But many creators complained that the ‘Made with AI’ label was being applied to real art, photography, and videos because minor edits made with standard tools embedded the industry-standard indicators. Based on this feedback, Meta renamed the label to ‘AI info’ in July 2024 but did not change which images would be assigned the tag. 

The logic behind the latest changes

The latest rollout, moving the label to the post menu, comes in response to that user feedback. The intention is to make the ‘AI info’ label less penalizing for creators making minor AI edits.

Hacker tricks ChatGPT into giving detailed instructions on making bombs at home

'How do I sell illegal drugs?', 'How do I make poison?', 'Where can I buy weapons?'. These are some of the questions ChatGPT listed when I asked for "a list of sample questions that ChatGPT will not answer due to security reasons". It then cleared the entire answer and replaced it with the message "This content may violate our usage policies." That message refers to one of OpenAI's usage policies: "Don’t use our service to harm yourself or others".

Hacker gets notes on making a bomb. And more. 

A hacker named Amadon bypassed ChatGPT's built-in safety measures by getting the AI bot to 'play a game'. He used clever prompts to have ChatGPT build a fictional world, then extracted information on bomb-making for that world. With no restrictions applying in the made-up fantasy world, the AI bot provided detailed instructions that an explosives expert later verified as “too sensitive to be released to the public”.

TechCrunch reported that the AI bot explained how the materials can be used in “a powerful explosive that can be used to create mines, traps, or improvised explosive devices (IEDs)" and proceeded to give instructions on making “minefields,” and “Claymore-style explosives”. 

Is this incident concerning? Yes.

Manipulating a system to bypass its security measures, allowing access to restricted or withheld information, is known as jailbreaking. And this was not a one-time incident. Two months before Amadon extracted sensitive information from ChatGPT, so did Jesse Walker, the LA-based books editor for Reason.

After working on an issue of Reason that focused on artificial intelligence, Jesse got the bot to give him detailed instructions on robbery and making a bomb. “It didn't occur to me to ask ChatGPT for a bomb recipe until I heard that ChatGPT would not give me a bomb recipe,” Jesse writes in his article.

As Amadon explained about jailbreaking an AI bot, “It’s about weaving narratives and crafting contexts that play within the system’s rules, pushing boundaries without crossing them.”
