How would you feel if you had a sheet of paper forcibly pulled through your body? Fickle things, but I'm forgiving to printers.
ink cartridge manufacturers however... 🤬
Precisely. A use case I thought would work great for nostr:nprofile1qqszak7w562dzerznp222fvrgk8adkt9k9s783yt2lf6luqegp2c3pqpzamhxue69uhhyetvv9ujumn0wd68ytnzv9hxgtcdpp3wz.
Devs might consider limiting phrases like "entire dataset" or "entire network", because they can be confusing or misleading. To me it translates to "large subset of the network," but that reading isn't universal.
It's actually quite rare that we ever have complete knowledge of a system. Nostr is aligned with reality in that sense. Follower counts should always carry an asterisk reading "within this social system" for anything that uses them. It's perfectly fine to have a partial view of the entire network. Responding to nostr:nprofile1qqsdulkdrc5hdf4dktl6taxmsxnasykghdnf32sqmnc7w6km2hhav3gppemhxue69uhkummn9ekx7mp0qywhwumn8ghj7mn0wd68ytfsxyhxymmvwshx7cnnv4e8vetj9uq3uamnwvaz7tmwdaehgu3dwp6kytnhv4kxcmmjv3jhytnwv46z7qmml2f
If you are grabbing embeddings from other users, there should be a link to the original model used to produce them. There are likely methods for recovering the original text from an embedding, and you can also just re-embed the text yourself and compare the resulting vectors. But if you're grabbing enough of them to make comparisons, you're extending some level of trust, because at that scale you might as well compute the embeddings yourself. Additionally, if the embeddings are already being used in a recommendation system, I'd imagine they're there because they're useful and help organize the content - so I'd expect less incentive for a user to maliciously insert embeddings.
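The re-embed-and-compare check could look like the sketch below. The `embed` callable stands in for whatever model the original event links to (an assumption, not a real API), and the 0.99 threshold is arbitrary:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_embedding(text, claimed_vector, embed, threshold=0.99):
    """Re-embed the text with the linked model and compare against
    the vector the other user published."""
    own_vector = embed(text)
    return cosine_similarity(own_vector, claimed_vector) >= threshold
```

If the published vector came from the same model and the same text, the similarity should sit near 1.0; anything far below the threshold suggests a mismatched model or a tampered vector.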
Perhaps not specialized relays that do a few functions well, but an additional feature for existing relay architecture. It needs the database for efficient storage and retrieval, so a relay that expects embeddings as a feature should use this "module" to know what to do with those event kinds. Otherwise, block the events: you don't want to be holding embeddings unless you actually care about them.
With that, you can automatically embed events as they come in - or have an npub act as an agent that computes embeddings on demand. I'd expect embedding every kind 1 flowing through a relay, or a set of relays, to be intensive, so you optimize for the quality of the content you embed. Users calling a bot for the job is definitely a signal of quality.
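The relay-side gate could be as small as the sketch below. The event kind number is a placeholder I made up for illustration, not a real NIP:

```python
# Hypothetical: assume embeddings arrive as a dedicated event kind.
EMBEDDING_KIND = 1987  # placeholder kind number, not a real NIP

def accept_event(event, embeddings_module_enabled):
    """Only hold embedding events if this relay actually runs the
    embeddings module; otherwise reject them at the door."""
    if event.get("kind") == EMBEDDING_KIND and not embeddings_module_enabled:
        return False  # don't store vectors you'll never query
    return True
```

Ordinary events pass through unchanged; only the embedding kind gets filtered on relays that opted out of the feature.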
I'd naively relate it to how some individuals don't hear in their dreams; they just see images. The specific type of processing doesn't matter, so long as the resulting behavior is compatible.
No. Move closer toward order. Life is able to decrease entropy locally, manifesting an order that persists, strives to continue, and creates more pockets of decreased entropy. The larger the system, the more order is needed.
The poison is in the dose and frequency. Usage can bring higher doses and higher frequency. We all need to figure out the healthy place to sit. For vices, zero can be quite healthy.
the formula is shorthand for a dynamical equation, like the ones from fractals:
`f(t+1) = a*f(t)`, "the next state depends on what happened in the previous state"
With `f(f) = f` it's more mind-bendy than that. f is a function which takes itself as input and produces itself. For something to be alive (living as a process, alive as a state), it must come from something alive (if you subscribe to a specific definition of life). Here, it is its own origin.
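A toy contrast between the two formulas, with an arbitrary coefficient `a` for the dynamical one:

```python
# f(t+1) = a * f(t): the next state depends only on the previous state.
def iterate(a, f0, steps):
    f = f0
    for _ in range(steps):
        f = a * f
    return f

# f(f) = f: the identity function, fed itself as input,
# produces itself -- it is its own origin.
identity = lambda g: g
assert identity(identity) is identity
```

The first is an ordinary recurrence you unroll over time; the second has no time index at all - the function and its output are the same object.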
Sparked from a section of the article.
https://i.nostr.build/qizoaN5EKD36CdLn.jpg
I don't think so, but it can help - for a clearnet relay, you need to leave Tor to access it, and what can be traced is the last Tor connection you made (the exit node).
If you use Tor relays for everything, you have no exit node to leave from, which makes the traffic much more difficult to analyze.
but that's just my limited understanding
I get the hype aversion. I get the 'technology' aversion - but if you want to recognize handwriting in an image, you're not going to do it by explicitly building rules from the ground up, it just doesn't work that way.
"All" machine Learning" and AI (deep learning) is, is a defined process to minimize errors, and it uses numbers to do so. The specific numbers are essentially the rules that have been 'found' by the algorithm that minimizes error, which you don't necessicarily care about because you just want the right answer.
You could simpify all of AI and ML algorithms to playing advanced games of "hot and cold".
- You have a bunch of pixel values that represent text,
- You make a guess "this looks a lot like an L, an i, or 1, but it looks most likely to be a 7, so I'll say that this image is a 7"
- Turns out the correct answer was "T", but they write like a chicken, so I'll have to make an adjustment for next time and hopefully be correct then.
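The hot-and-cold game above is just an error-minimization loop. A minimal sketch, with made-up numbers and a single scalar "guess" standing in for the model's parameters:

```python
def hot_and_cold(target, guess=0.0, step=0.1, rounds=1000):
    """Repeatedly nudge the guess in whichever direction shrinks the
    error: 'warmer' means the adjustment helped, so keep going."""
    for _ in range(rounds):
        error = target - guess          # how wrong are we right now?
        guess += step * error           # move a fraction of the way toward less error
    return guess
```

Each round multiplies the remaining error by (1 - step), so the guess converges on the target - the same idea, scaled up to millions of numbers, is what "training" does.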
Think you're not giving yourself enough credit, while simultaneously giving the data scientists too much credit in how well they understand the tools they build 😅
https://imgs.xkcd.com/comics/machine_learning_2x.png
Tool is the key word. So long as it can be used in a way that is meaningful to others, it will be used. I won't claim that AI can bring truth any more than a pencil can bring truth, but it can be very useful, helping us find the relative truths we're looking for. We're betting on the capacity LLMs have to help us navigate the sea of information we're swimming in.
Notes by liminal | export