Wouldn’t it cut down on search queries (and thus save resources) if I could search for “this is my phrase” rather than rawdogging it as an unbound series of words, each of which seems to be pulling up results unconnected to the other words in the phrase?
I can only think of two reasons why a website’s search engine would lack such incredibly basic functionality:
- The site wants you to spend more time there, seeing more ads and padding out their engagement stats.
- They’re just too stupid to realize that these bare-bones search engines are close to useless, or they don’t think it’s worth the effort. Apathetic incompetence, basically.
Is there a sound financial or programmatic reason for running a search engine which has all the intelligence of a turnip?
Cheers!
EDIT: I should have been a bit more specific: I’m mainly talking about search engines within websites (rather than DDG or Google). A good example is BitTorrent sites, which rarely let you search for an exact phrase. Most shopping websites, even the behemoth Amazon, don’t seem to respect quotation marks around phrases.
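To be concrete about what I mean, here’s a rough sketch (the documents and query are invented for illustration): word-by-word matching drags in every page that shares even a single word with the query, while the quoted phrase narrows results to pages that actually contain it.

```python
# Toy illustration: why unquoted word-by-word search pulls in unrelated results.
# The documents and the query are made up for the example.
docs = {
    1: "how to reset a bricked router",
    2: "router table reset jig for woodworking",
    3: "bricked phone? how a factory reset can help",
}

query = "reset a bricked router"

# "Bag of words" matching: any document containing ANY query word counts as a hit.
words = query.split()
any_word_hits = [d for d, text in docs.items() if any(w in text.split() for w in words)]

# Exact-phrase matching: only documents containing the whole phrase count as hits.
phrase_hits = [d for d, text in docs.items() if query in text]

print(any_word_hits)  # [1, 2, 3] -- every doc shares at least one word with the query
print(phrase_hits)    # [1]       -- only the doc with the literal phrase
```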
It’s cheaper for them not to do it, and you’ll still search anyway, so they don’t care.
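Part of the “cheaper” is genuinely programmatic: a bag-of-words lookup only needs to know which documents contain which terms, while a phrase query also needs word positions, so the index stores more and does more work per query. Here’s a rough sketch with a made-up in-memory index (real engines like Lucene are far more elaborate):

```python
from collections import defaultdict

# Rough sketch of why phrase queries cost more: a plain inverted index only maps
# term -> documents, but a phrase query needs term -> document -> positions,
# which means more storage and more comparisons at query time.
docs = {
    1: "cheap usb c cable for fast charging",
    2: "fast usb hub with a c cable port",
}

plain_index = defaultdict(set)        # term -> {doc ids}
positional_index = defaultdict(dict)  # term -> {doc id: [positions]}
for doc_id, text in docs.items():
    for pos, term in enumerate(text.split()):
        plain_index[term].add(doc_id)
        positional_index[term].setdefault(doc_id, []).append(pos)

def phrase_match(phrase: str) -> list[int]:
    """Return doc ids where the query terms appear consecutively, using positions."""
    terms = phrase.split()
    # Candidate docs must contain every term somewhere.
    candidates = set.intersection(*(plain_index[t] for t in terms))
    hits = []
    for doc_id in candidates:
        # The phrase matches if some occurrence of the first term is followed
        # by the remaining terms at consecutive positions.
        if any(all(start + i in positional_index[t][doc_id]
                   for i, t in enumerate(terms))
               for start in positional_index[terms[0]][doc_id]):
            hits.append(doc_id)
    return hits

print(sorted(plain_index["usb"] & plain_index["c"] & plain_index["cable"]))  # [1, 2]
print(phrase_match("usb c cable"))                                           # [1]
```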
Not any more. I use an offline open-source LLM first quite a bit now because it’s better than their junk. It may only be accurate 80% of the time, but that’s still far better than any current search engine manages.
People complain about web scrapers, but scraping is the only practical alternative for finding info and sources now that the web crawlers are worse than trash.
No. The issue is that the websites are trash, not the crawlers. SEO has created a weird amalgamation of content, filler, and keywords. It’s why recipe sites attach a story to every recipe.
Google is very much responsible for the current state of web design, though.
Sadly, and honestly, this.
Using an LLM with 4-year-old data is a better experience than digging through three pages of Google blog spam