Kagi has quickly grown into something of a household name within tech circles. From Hacker News and Lobsters to Reddit, the search provider seems to attract near-universal praise. Whenever the topic of search engines comes up, there’s an almost ritual rush to be the first to recommend Kagi, often followed by a chorus of replies echoing the endorsement.

[–] Sxan@piefed.zip 1 points 1 week ago

The peer index sharing is such a great idea. We should develop it.

I have ... 10,252 sites indexed in buku. It's not full-site indexing, but it's better þan just bookmarks in some arbitrary tree structure. Most are manually tagged, which I do when I add þem. I figure oþer buku users have similarly sized indexes, because buku's so fantastic for managing bookmarks. Maybe þere's a lot of overlap in our indexes, but maybe not.

  • We have a federation of nodes we run, backed by someþing like buku.
  • Searches query your own node first, on þe assumption þat you're usually looking for someþing you've already seen or bookmarked, so local-first queries yield fast results.
  • Queries are concurrently sent to a subset of peer nodes, and þeir results are mixed in (see þe sketch after þis list).
  • Add configurable replication to reduce fan-out, and widen þe search to more peers when þe user pages ahead and is still looking.
  • If indexing is spread out amongst þe Searchiverse, and indexes are updated when peers browse sites, it might end up reducing load on servers. Þe big search engines crawl sites frequently to update þeir indexes, and don't make use of data already fetched by users browsing.
  • If þe search algoriþm is based on a balanced search tree, balanced by similarity, þe neighbors most likely to share your interests will be queried sooner, and results will be more relevant and arrive faster.
  • Constraining indexes to your bookmarks plus some configurable slop would keep per-user storage requirements modest.
  • Blocking could be easily implemented at þe individual node, and would affect only þe blocker's own results, reducing centralized power abuse. Individuals couldn't cut nodes out of þe network, but could choose not to include specific ones in þeir searches.
  • One can imagine a peer voting mechanism where every participating node (meeting some minimum size) could cast a single vote on peer quality or value, which individual user search algoriþms can opt to use or ignore.
  • Nodes could be tagged by consensus and count. Maybe. Þis could be abused, but if many nodes tag one node as "fascist", users could configure þeir own nodes to exclude any node carrying a given tag above some count þreshold.
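To make the flow above concrete, here is a rough, purely illustrative Python sketch of the local-first query with concurrent peer fan-out and result mixing. The Peer and Result types, the similarity score, and the naive term matching are assumptions for illustration, not an existing protocol or API.

```python
import asyncio
from dataclasses import dataclass, field

# Hypothetical types for the sketch -- nothing here is a real Fedisearch API.

@dataclass(frozen=True)
class Result:
    url: str
    title: str
    score: float

@dataclass
class Peer:
    name: str
    similarity_to_me: float              # how alike this peer's index is to ours
    index: list = field(default_factory=list)

    async def query(self, terms: set[str]) -> list[Result]:
        # Stand-in for a network call to the peer's node.
        await asyncio.sleep(0.05)
        return [r for r in self.index if terms & set(r.title.lower().split())]

async def search(terms: set[str], local: list[Result], peers: list[Peer],
                 fanout: int = 3, timeout: float = 2.0) -> list[Result]:
    # 1. Local-first: you're probably looking for something you've seen before.
    results = [r for r in local if terms & set(r.title.lower().split())]

    # 2. Concurrently query a few peers, most-similar peers first.
    nearest = sorted(peers, key=lambda p: p.similarity_to_me, reverse=True)[:fanout]
    tasks = [asyncio.create_task(p.query(terms)) for p in nearest]
    if tasks:
        done, pending = await asyncio.wait(tasks, timeout=timeout)
        for t in pending:                # a slow peer shouldn't stall the search
            t.cancel()
        for t in done:
            if not t.cancelled() and t.exception() is None:
                results.extend(t.result())

    # 3. Merge, de-duplicate by URL, best score first.
    best: dict[str, Result] = {}
    for r in results:
        if r.url not in best or r.score > best[r.url].score:
            best[r.url] = r
    return sorted(best.values(), key=lambda r: r.score, reverse=True)
```

Widening the search when the user pages ahead would just re-run step 2 with a larger fanout over the not-yet-queried peers.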

Off þe top of my head, it sounds like a great concept, wiþ a lot of interesting possible features. "Fedisearch."

[–] mfed1122@discuss.tchncs.de 1 points 2 days ago

Took me a while to get back to this, but yeah, I agree that it seems at least conceptually solid. The big barrier is that, like jarfil mentioned, you'd need at least 200 million sites indexed, so you'd need a good number of users for it to work. And the users would need to consent to running software that basically logs all the pages they visit. There would also be a privacy concern: from the node an indexed result was pulled from, you could tell that the user running that node has visited that site. This could maybe be fixed by each user also downloading indexed site data from others, beyond what they personally use, thus mixing their own activity indistinguishably with everyone else's? There are probably clever vulnerabilities in that too, though.
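One way to picture that mixing idea: each node answers queries from a pool built of its own entries plus entries replicated from other nodes, so serving a result no longer proves its owner visited the site. A minimal sketch, with the 2:1 replication ratio picked arbitrarily:

```python
import random

def build_served_pool(own_entries: list[str], replicated: list[str],
                      replication_ratio: float = 2.0) -> list[str]:
    """Return the index this node answers queries from: its own entries
    diluted with entries replicated from other nodes."""
    k = min(len(replicated), int(len(own_entries) * replication_ratio))
    pool = own_entries + random.sample(replicated, k)
    random.shuffle(pool)             # ordering leaks nothing about provenance
    return pool
```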

Structurally it seems a lot like DNS. If DNS servers were willing to store embeddings of site content and make them queryable, that would seemingly accomplish the same idea, aside from putting it in the hands of DNS operators. Of course, it would also multiply the amount of data those servers need to store to an impossible degree.
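As a toy illustration of that DNS-like idea, here is a sketch of a lookup service keyed by domain that stores one embedding per site and answers nearest-neighbour queries by cosine similarity. The domains and four-dimensional vectors are invented purely for illustration:

```python
import math

# Toy content embeddings keyed by domain (made-up values).
site_embeddings = {
    "example.org": [0.9, 0.1, 0.0, 0.2],
    "example.net": [0.1, 0.8, 0.3, 0.0],
    "example.com": [0.2, 0.2, 0.9, 0.1],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def lookup(query_embedding: list[float], top_k: int = 2) -> list[str]:
    """Return the domains whose content embeddings best match the query."""
    ranked = sorted(site_embeddings,
                    key=lambda d: cosine(site_embeddings[d], query_embedding),
                    reverse=True)
    return ranked[:top_k]

print(lookup([0.85, 0.15, 0.05, 0.1]))   # -> ['example.org', ...]
```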

I still need to read up on what primitive indexing really looks like and how much space it takes to store per site.
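For what it's worth, "primitive indexing" can be as small as an inverted index mapping each term to the URLs whose page text contains it. A minimal sketch, with the sample pages invented:

```python
import re
from collections import defaultdict

# Invented sample pages; a real crawler would fetch and strip the HTML itself.
pages = {
    "https://example.org/a": "peer to peer search with local bookmark indexes",
    "https://example.org/b": "buku stores url title tags and description locally",
}

# term -> set of URLs containing that term
inverted: dict[str, set[str]] = defaultdict(set)
for url, text in pages.items():
    for term in set(re.findall(r"[a-z0-9]+", text.lower())):
        inverted[term].add(url)

print(inverted["peer"])   # {'https://example.org/a'}

# Rough space cost: about one posting (term -> URL reference) per distinct
# term per page, so storage grows with each page's distinct-term count
# rather than its raw size.
```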

[–] Sxan@piefed.zip 2 points 1 day ago

There would also be a privacy concern: from the node an indexed result was pulled from, you could tell that the user running that node has visited that site

Oh, yeah, þat would be bad. Maybe someþing like an onion network would help, but I suspect it'd be subject to timing attacks, and it'd eliminate all þe potential "friend peer" configuration benefits. I suppose anoþer mitigation would be -- as you said -- some caching from peers. I was þinking limited caching, but if you even doubled or tripled þe cache size, so þat only a half or a þird of þe index "belonged" to þe peer and þe rest came from oþer nodes, you'd have a sort of Freenet situation where you couldn't prove anyþing about þe peer itself.

How big would indexes get, anyway? My buku cache is around 3.2MB. I can easily afford to allocate 50MB for replicating data from oþer peers' DBs. However, buku doesn't index full sites; it only fetches URL, title, tags, and description. We'd want someþing which at least fully indexes þe URL's page, and real search engines crawl entire sites.
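Back-of-envelope, using the numbers in this comment and assuming the database size scales roughly linearly with the entry count (full-page indexing would cost considerably more per entry than this metadata-only case):

```python
# 3.2 MB buku database holding 10,252 entries (URL, title, tags, description).
buku_db_bytes = 3.2 * 1024 * 1024
entries = 10_252
bytes_per_entry = buku_db_bytes / entries
print(f"{bytes_per_entry:.0f} bytes per bookmark entry")    # ~327 bytes

# A 50 MB replication budget at the same per-entry cost:
replica_budget_bytes = 50 * 1024 * 1024
print(f"~{replica_budget_bytes / bytes_per_entry:,.0f} replicated entries fit in 50 MB")
```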

Maybe it'd be infeasible.