
OpenAI’s new frontier models mark a huge change in how AI will be built

In early March, OpenAI unleashed a one-two punch, dropping two major frontier models just days apart.

First, we got the new GPT-5.3, an “instant” model optimized for fast, accurate responses.

Then, OpenAI released GPT-5.4 two days later. This is a “thinking” model optimized for deep analytical work.

I was a beta tester for OpenAI in the early days, and today I spend hundreds of dollars per month using their models through the OpenAI API. 

I’ve tested both GPT-5.3 and 5.4 extensively since their launch. The new models represent a totally different approach, and hint at a major change in how big AI companies build their tech.

The doer

OpenAI’s first new model, GPT-5.3, is built for speed. GPT-5.3 generally responds to queries within seconds.

In its release notes for the new model, OpenAI says that GPT-5.3 is built to be a snappy, clever writer and a fast communicator.

“GPT‑5.3 Instant delivers more accurate answers, richer and better-contextualized results when searching the web, and reduces unnecessary dead ends, caveats, and overly declarative phrasing that can interrupt the flow of conversation,” the company says.

The model differs from the instant models OpenAI has released before. Previously, the company’s instant models seemed to rely almost exclusively on their world knowledge to answer questions.

In my experience, instead of crawling the Internet for fresh data, those earlier instant models often fell back on what they’d learned during their initial training.

This approach does indeed result in lightning-fast responses. But it meant that OpenAI’s previous instant models were, to put it frankly, kind of dumb.

If you wanted to quickly know the capital of California (Sacramento) or determine whether the plant you just touched was poison oak (Yes), you could send a photo or pose a query to earlier instant models and get a decent response.

If you wanted to know about current events or news, though, the models struggled. Because they relied on pre-trained world knowledge, they were often stuck in the past, and struggled to integrate new information.

In the ultimate irony, OpenAI’s early instant models seemed not to know about their own existence. I recall chatting with an instant version of GPT-5.1. The model swore up and down that it didn’t exist, and that GPT-5 was the latest OpenAI model. 

Why? Because at the time the model was trained, it did not yet exist. Stuck in that prior world, the model was unable to comprehend even this most basic snippet of new information.

GPT-5.3 is different. It still relies heavily on its pre-trained world knowledge. But OpenAI says that it has been optimized to quickly browse and make sense of information it finds on the internet, and via other sources.

The model “…more effectively balances what it finds online with its own knowledge and reasoning—for example, using its existing understanding to contextualize recent news rather than simply summarizing search results,” according to OpenAI’s release notes.

The new model is also notably less timid. Instant models have limited time to think deeply about a user’s query and understand their intent. In the past, that meant they tended to give vague, equivocal answers to queries with even the remote possibility of causing harm.

OpenAI gives the example of a person asking about the proper trajectory needed for an arrow to hit an archery target. That’s the kind of simple physics problem somebody might pose if they were practicing for an AP exam–or simply trying to learn archery.

Before, instant models often started their responses by scolding the user. They’d warn that firing arrows might be dangerous, for example, and either provide a wussy non-response or write several paragraphs of disclaimers before giving the answer.

OpenAI says that GPT-5.3 does a much better job of correctly understanding the context of users’ questions. That lets it quickly understand that a user asking about trajectories isn’t trying to murder someone with a bow and arrow. The model can thus answer the user’s questions without lots of equivocating and hedging.
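The trajectory question OpenAI cites is ordinary projectile physics. As a rough illustration of what the model is being asked to compute, here’s a minimal sketch (assuming a level range, no air resistance, and made-up numbers for arrow speed and target distance):

```python
import math

def launch_angle_deg(speed_mps: float, distance_m: float, g: float = 9.81) -> float:
    """Low launch angle (in degrees) for a projectile to land distance_m away
    on level ground, ignoring drag. Uses the range formula:
    d = v**2 * sin(2 * theta) / g, solved for theta."""
    ratio = g * distance_m / speed_mps**2
    if ratio > 1:
        raise ValueError("Target is out of range at this speed")
    return math.degrees(0.5 * math.asin(ratio))

# Hypothetical example: an arrow at 60 m/s, target 50 m downrange
angle = launch_angle_deg(60.0, 50.0)  # roughly 4 degrees
```

A model that grasps the context sees a homework-style physics problem like this one, not a threat.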

In my testing so far, all these changes do appear to genuinely work well. GPT-5.3 is the first instant model I’ve used that doesn’t feel like a dumbed-down version of OpenAI’s thinking versions.

Instead, it feels like a full frontier model that can do nearly everything previous thinking models were able to accomplish–only much faster and with snappier, more engaging prose.

The thinker

GPT-5.3’s speed and cleverness free up GPT-5.4 to be something entirely different.

Where GPT-5.3 is the “doer”–quickly cranking out a decent version of a response to any query–GPT-5.4 is very much the “thinker.”

The model explores deeply before responding to queries. In my own testing, it sometimes took as long as five to ten minutes to get back to me on complex requests.

Like many scientific or analytical people, the model is extremely detail-oriented and comprehensive in its responses. And like some of those people, it’s also a little dull.

Reading its responses feels a bit like perusing the instruction manual for your toaster or slogging through a fascinating but pedantic scientific paper. You learn a lot, but it’s not exactly scintillating stuff.

Again, that marks a new approach. Before, OpenAI’s thinking models tried to do everything—craft code, analyze scientific problems at a deep level, and write in a compelling and creative way.

Like many human jacks-of-all-trades, the models did everything decently, but no one thing exceptionally well.

Because GPT-5.4 seems to abandon the idea of writing creatively or responding in a snappy and pleasant way, it gains the space to excel at what it was built to do—crunch numbers, build software, and analyze data.

The bichon test

To compare the models, I gave both a simple prompt: “Choose a specific topic related to Bichon Frises and then write an article about it.”

GPT-5.3 responded instantly with an article titled “Why Bichon Frises Are One of the Best Dogs for Apartment Living.”

Structured as a listicle, the article had a well-crafted introduction that cleanly transitioned into the main topic.

It included helpful, well-written notes about the breed’s size (“A Bichon can curl up beside you on the couch, nap in a small bed near your desk, and move around a one-bedroom apartment without constantly feeling underfoot.”), temperament, and more.

In contrast, GPT-5.4 chose to expound at length about the problem of Bichon Frise tear stains. Its article was filled with unbearably dry nuggets like this little doozy of a paragraph:

“Tear stains are primarily caused by molecules called porphyrins. These iron-containing pigments are naturally present in tears and saliva. When tears sit on a dog’s fur for extended periods, the porphyrins oxidize when exposed to air. That oxidation produces the rusty red or brown color you see beneath the eyes.”

GPT-5.4 feels a bit like the guy you’d consult if you needed help doing your taxes or wanted to better understand particle physics. 

But you really wouldn’t want to get stuck next to him at a party. The model is fantastic at complex analytical tasks, but appears deliberately built to eschew the creative, communicative side of work.

A better approach?

At first, I found this bifurcated approach challenging.

Before, I could simply default to using the most up-to-date thinking model available from OpenAI. 

These models were clearly the “premium” version of OpenAI’s lineup. The instant models felt built for people who couldn’t be bothered to shell out $20 for ChatGPT access. 

Under OpenAI’s new approach, though, that divide isn’t so clear.

I find that when I need help researching something deeply or doing anything involving numbers and data, I turn to GPT-5.4.

Breaking down the stats from my YouTube channel, comparing the relative merits of Starlink and Comcast Business—those are the kinds of things that I use 5.4 to do.

When I want to converse with a chatbot for a quick (if somewhat cursory) answer, I find myself using the 5.3 model more and more.

Recent personal queries I’ve posed to GPT-5.3 include “Why do we yawn?” (to cool the brain), “What’s this weird coin I found in my closet?” (a 1936 British one-penny), and “How do I clean fabric webbing?” (with vinegar).

I’ve also used the model at work for simple Python questions, background research, and easy but tedious tasks like calculating the square footage of a room based on a series of measurements.
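That square-footage chore is exactly the kind of easy-but-tedious task these models handle well. A minimal sketch of what I’m asking for, with hypothetical measurements for an L-shaped room broken into rectangular sections:

```python
def square_footage(sections):
    """Total floor area of a room modeled as rectangular sections,
    each given as a (length_ft, width_ft) pair."""
    return sum(length * width for length, width in sections)

# Hypothetical L-shaped room: a 12x10 main area plus a 6x4 alcove
room = [(12.0, 10.0), (6.0, 4.0)]
total = square_footage(room)  # 120 + 24 = 144 sq ft
```

Trivial to write, but tedious to do by hand across a dozen measurements, which is why it’s a natural instant-model query.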

One thing I’ve realized in using GPT-5.3 is that speed matters more than I thought.

Previously, OpenAI’s instant models were too underpowered to be of much use for anything but the simplest of queries. Power users like me would always turn to the thinking models, which took as long as five minutes to render a response.

Now that GPT-5.3 is good enough to provide genuinely useful responses, I’m seeing how nice it is to get data back instantly. 

A few minutes of waiting for responses from a chatbot, sprinkled throughout a workday, doesn’t feel like much. But those minutes add up. I find I can work faster and better now that I can use GPT-5.3 for more things, and get answers right away.

Based on what I’ve seen so far, I expect OpenAI will continue down this new, split model-building path.

GPT-5.3 is snappy, and in many ways works better than GPT-5.4. But it’s also probably much cheaper to run.

Because the model presumably relies more on its pre-trained world knowledge, it likely burns through far fewer tokens to perform its work than a thinking model.

If more power users like me find they can genuinely rely on an instant model for good responses, that will reduce the number of people who turn to the more expensive thinking models for everyday queries.

That should allow OpenAI to reach profitability faster by cutting its costs while still collecting the same $20 (or more) per month from users like me.

Longer term, if this approach proves fruitful, it’s possible that we’ll see a shift away from the use of thinking models entirely.

For a while, the extra work these models did yielded a notably better response. With GPT-5.3, that no longer seems to be a given.

If OpenAI can continue to improve its instant models, we may see a swing back toward quick-and-good-enough LLMs, and away from the slow, meticulous ones that are in vogue today.

Those slower, more powerful models might become the purview of coders and data analysts, with everyone else relying on increasingly powerful instant ones. That would speed up the experience of interacting with LLMs, and help AI companies scale by dramatically reducing their costs.

We’re not there yet. But OpenAI’s new pair of models is a big shift in the industry, and a tantalizing step in that new direction.
