Perplexity, Copilot, and other AI search engines put to the test

AI is coming for the search industry. Or so we are told. Now that Google seems to be getting worse and tools like ChatGPT, Google Gemini, and Microsoft Copilot are getting better and better, we seem to be moving toward a new way to find and consume information online. Companies like Perplexity are pitching themselves as next-generation search products, and even Google and Bing are betting that AI is the future of search. Goodbye, 10 blue links; hello direct answers to all my weird questions about the world.

But what you need to understand about a search engine is that it is many things at once. For every person who uses Google to find important, hard-to-access scientific information, many more use it to find their email inbox, get to Walmart’s website, or remember who was president before Hoover. And then there’s my favorite fact: every year, a large number of people go to Google and type “google” into the search box. We usually talk about Google as a research tool, but in reality Google is asked billions of times a day to do just about everything you can imagine.

So the real question facing all these would-be Google killers isn’t how well they can find information. It’s about how well they can do everything Google does. So I decided to put some of the best new AI products to the real test: I grabbed the latest list of most Googled searches and questions according to SEO research firm Ahrefs and plugged them into several AI tools. In some cases, I found that these language model-based bots are actually more useful than a Google results page. But in most cases, I discovered exactly how difficult it will be for anything – AI or otherwise – to replace Google at the center of the internet.

People who study search say there are basically three types of searches. The first and most popular is navigation, where people simply type the name of a website to get to that website. Nearly all of the most popular searches on Google, from “youtube” to “wordle” to “yahoo mail,” are navigational queries. In reality, this is the main job of a search engine: to direct you to a website.

In reality, a search engine’s main job is to direct you to a website

For navigational queries, AI search engines are universally worse than Google. When you perform a navigational search on Google, it is extremely rare that the first result is not the one you are looking for. Sure, it’s silly to show you a page of results when Google should really just take you straight to Amazon.com or whatever, but it’s fast and it rarely gets it wrong. The AI bots, on the other hand, like to think for a few seconds and then provide a bunch of quasi-useful information about the company, when all I want is a link. Some didn’t even link to the site at all.

I don’t hate the additional information so much as I hate how long these AI tools take to give me what I need. Waiting ten seconds for three paragraphs of generated text about Home Depot isn’t the answer; I just want a link to Home Depot. Google wins that race every time.

The next most popular form of search is the information query: you want to know something specific, to which there is only one correct answer. “NFL scores” is a hugely popular information query; “what time is it” is another; so is “weather.” It doesn’t matter who tells you the score, the time, or the temperature — it’s just something you need to know.

Screenshot: David Pierce / The Verge

Here, the results are all over the map. For real-time things like sports scores, the AI can’t be trusted: Perplexity, among others, often gave me outdated information, although Copilot was usually right. Google not only gets these right but usually shows a widget with stats and other information, which is better than anything the others offered. Ditto anything that requires your specific location or context: Google probably has that information about you, but the AI bots usually don’t.

When it comes to more evergreen information, like “how many weeks in a year” or “when is Mother’s Day,” everything I tested was correct. In many cases I actually preferred the AI responses, which add a bit of useful context. But I’m not sure how often I can trust them. Google told me there are 52.1429 weeks in a year, but explained that it’s actually 52 weeks and a day, plus an extra day for leap years. That’s more useful than just 52.1429! But then Perplexity told me that a regular year is actually 52 weeks, and a leap year is 52 weeks and a day – before directly contradicting itself two sentences later. Here’s the whole answer; just try to understand it:

A regular year has approximately 52 weeks, while a leap year has 52 weeks and 1 day. More precisely, a regular year actually consists of 52.143 weeks, which means that in a normal year there is one extra day. On the other hand, a leap year, which occurs every four years, with certain exceptions, has 52 weeks and 2 days. This difference in the number of weeks is caused by the extra day in a normal year and the extra day in February during a leap year.
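For what it’s worth, this is arithmetic you can check yourself in a couple of lines — here’s a quick sketch that reproduces the figures from both answers:

```python
# Sanity-check the "weeks in a year" claims:
# a common year has 365 days, a leap year 366, and a week has 7 days.

common_days, leap_days = 365, 366

weeks, extra = divmod(common_days, 7)
print(f"common year: {weeks} weeks + {extra} day(s)")  # 52 weeks + 1 day

weeks, extra = divmod(leap_days, 7)
print(f"leap year: {weeks} weeks + {extra} day(s)")    # 52 weeks + 2 days

# The decimal figure Google quoted:
print(round(common_days / 7, 4))  # 52.1429
```

So a common year really is 52 weeks plus one day, and a leap year 52 weeks plus two — which is what Google said, and what Perplexity said only half the time.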

After doing some more research, I’m now confident the answer is the one Google gave. But this all took too long, and having to fact-check my searches defeats the purpose of a tool that summarizes things for me. Google continues to win here on one thing and one thing alone: speed.

However, there is one subgenre of information queries for which the exact opposite is true. I call them buried information queries. The best example I can offer is the very popular question “how to take a screenshot on Mac.” There are a million pages on the internet that contain the answer – it’s just Cmd-Shift-3 to capture the entire screen or Cmd-Shift-4 to capture a selection, you’re welcome – but that information is mostly buried under a lot of ads and SEO nonsense. All the AI tools I tried, including Google’s own Search Generative Experience, simply extract that information and give it straight to you. This is amazing!

Screenshot: David Pierce / The Verge

Does this raise thorny questions about the business model and structure of the internet? Yes! But as a pure search experience, it’s much better. I’ve had similar results asking about ingredient substitutions, coffee ratios, headphone waterproofing, and other information that’s easy to know yet often too hard to find.

This brings me to the third type of Google search: the exploration query. These are questions to which there is no single clear answer but which are the start of a learning process. On the most-popular list, things like “how to tie a tie,” “why were chainsaws invented,” and “what is tiktok” count as exploration queries. If you’ve ever Googled the name of a musician you just heard of, or looked up things like “things to do in Helena Montana” or “NASA history,” you’re exploring. According to the rankings, these are not the most common things people use Google for. But these are the moments where AI search engines can shine.

Wait: why were chainsaws invented? Copilot gave me a multipart answer about their medical origins before describing their technological evolution and eventual adoption by loggers. It also gave me eight pretty useful links to read more. Perplexity gave me a much shorter answer but also included some cool images of old chainsaws and a link to a YouTube explainer on the subject. Google’s results contained many of the same links but surfaced nothing for me up front; even the generative search only gave me the basics.

The best thing about the AI engines is the citations. Perplexity and others are slowly getting better at linking to their sources, often inline, meaning that if I come across a particular fact that piques my interest, I can go straight to the source from there. They don’t always provide enough sources or put them in the right places, but this is a good and useful trend.

One experience I had while running these tests was actually the most eye-opening of them all. The most searched question on Google is a simple one: “what to watch.” Google has a very specific page design for this, with rows of “Top picks” posters like Dune: Part Two and Imaginary; a “For you” row, which for me included Deadpool and Halt and Catch Fire; and then popular titles and options sorted by genre. None of the AI search engines fared as well: Copilot listed five popular movies; Perplexity presented a seemingly random set of options, from Girls5eva to Quest to Shōgun; another tool gave me a bunch of outdated information and recommended I watch “the 14 best Netflix original movies” without telling me what they are.

AI is the right idea, but a chatbot is the wrong interface

In this case, AI is the right idea – I don’t want a pile of links, I want an answer to my question – but a chatbot is the wrong interface. So, for that matter, is a page of search results! Google, clearly aware that this is the most searched question on the platform, has managed to design something that works much better.

In a way, that’s a perfect summary of the state of affairs. For at least some web searches, generative AI could be a better tool than the search technology of decades past. But modern search engines aren’t just pages of links. They’re more like miniature operating systems. They can answer questions instantly; they have calculators, converters, flight pickers, and all kinds of other tools built in; they can get you where you’re going in just a click or two. Judging by these lists, the goal of most searches is not to embark on a journey of information wonder and discovery. The goal is to get a link or an answer and then leave. Right now, these LLM-based systems are simply too slow to compete.

I think the big question is less about technology and more about products. Everyone, including Google, believes that AI can help search engines understand queries and process information better. That’s a given in the industry right now. But can Google reinvent its results pages, its business model, and the way it presents, summarizes, and surfaces information faster than the AI companies can turn their chatbots into more complex, more versatile tools? Ten blue links aren’t the answer to search, but neither is an all-purpose text box. Search is everything, and everything is search. It will take a lot more than a chatbot to kill Google.
