#introductions Hi nostr, I am Bob. I am starting a project called Nostrchive. I have the crazy idea to collect and archive as much nostr data as possible, with the goal of pre-processing (collecting, collating, organizing, and tokenizing) data for (re)training FOSS nostr-aware LLMs for nostr search. Any strfry relay operators who might consider whitelisting my archive strfry relay for negentropy connections, please reach out. I would like to identify optimal batch sizes and connection windows in UTC. Thanks for your consideration.
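For relay operators unfamiliar with the mechanics: strfry ships a `sync` subcommand that uses negentropy set reconciliation, so a whitelisted archive could pull batches with a scoped filter. A minimal sketch — the relay URL, kinds, and `since` timestamp are placeholders, not Bob's actual settings; the runnable part below only validates the filter JSON, with the actual strfry invocation shown as a comment:

```shell
# Hypothetical filter for one archive batch: metadata, notes, and contact
# lists since an example cutoff timestamp (placeholder values).
FILTER='{"kinds":[0,1,3],"since":1700000000}'

# Sanity-check the filter is valid JSON before handing it to strfry.
echo "$FILTER" | python3 -c 'import json,sys; json.load(sys.stdin); print("filter ok")'

# The actual pull from a whitelisted relay would look something like:
# strfry sync wss://relay.example.com --filter "$FILTER" --dir down
```

Scoping the filter per batch is what makes the "optimal batch sizes and connection windows" question tractable, since each sync round only reconciles the filtered subset.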
Cool idea. Welcome to #Nostr🤙🏻.
Welcome to Nostr, Bob! I love your project, that is something Nostr really needs. Kind-1984s and their targets disappear so fast it's hard to gather any kind of dataset...
Reconsider, Bob...
The NSA, FBI, and GRU won't share access to their Nostr dataset. Neither will Raymon the botmaster. But Bob might. I'd prefer it if NO ONE were cyberstalking nostriches, of course. But the world we live in...
Similar to bitcoin, nostr is an open book for all to see, which I am okay with. There are easy enough ways to do pseudonymity in these open systems if you need/want it, but as you point out, the asymmetry of the three-letter data hoard is a problem we can begin to address. If folks require absolute privacy, I am not sure nostr is the right protocol for them. I think we should at least have open datasets where it is possible. AI frameworks are open enough, but the up-to-date data is not. The tech giants are really all data hoarders, and they monetize those hoards by selling to advertisers, NGOs, governments, and private firms. Not really much access for plebs, which will reduce the impact of pleb-driven AI tech.
More of this. We don't need a lot of archives, a few is enough, but it is very necessary. The company I work for is going to be building a full-text search for events also, though we are more focused on winning corporate customers to use nostr-based infrastructure.
Hi @mleku Just thinking here... Building search is really hard, and I am sure I am not the man for that job; however, I like to organize, analyze, and automate. I also have a large symmetric connection of which I can only really use 25% for IRL, so I thought to myself: what useful service can I create for nostr with my excess capacity? I am not sure I will be able to host a proper relay archive available to the public, as managing a single large relay database would be unwieldy. It might be possible to host particular curations of the data as separate public relays, though. My main focus is segmenting and archiving the data. This seems achievable, manageable, and open to automation. I believe this will serve as a useful foundation for projects needing large nostr datasets for LLMs. I expect I can make this segmented data available as periodic updates to the public. Early stages, building out the garage data center off of local surplus in the Bay Area.
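The segmentation step described above is straightforward to automate, since NIP-01 events are flat JSON objects with an integer `kind` field. A minimal sketch, assuming the archive is exported as one serialized event per line (JSONL); `segment_by_kind` is a hypothetical helper and the sample events are fabricated for illustration:

```python
import json
from collections import defaultdict

def segment_by_kind(lines):
    """Group serialized nostr events (one JSON object per line) by kind.

    Returns a dict mapping kind -> list of parsed event dicts, which could
    then be written out as per-kind curation dumps or fed to per-relay stores.
    """
    buckets = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line:
            continue  # tolerate blank lines in the dump
        event = json.loads(line)
        buckets[event["kind"]].append(event)
    return buckets

# Fabricated sample dump: three events spanning two kinds (fields trimmed).
dump = [
    '{"id": "a1", "kind": 1, "content": "hello nostr"}',
    '{"id": "b2", "kind": 1984, "content": "report"}',
    '{"id": "c3", "kind": 1, "content": "gm"}',
]
buckets = segment_by_kind(dump)
print(sorted(buckets))   # → [1, 1984]
print(len(buckets[1]))   # → 2
```

Because each bucket is independent, the same loop scales to per-kind output files or per-curation relays without holding one unwieldy monolithic database.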