 Technically it is. But I don't know if the OS will like it. We will see.  
 It's just like mining on the GPU.  
 I'm not sure mobile GPUs do general purpose kernels, but I'm not an expert 🤷🏾‍♂️.

Also GPU computations tend to like working on array-like data. Is handling Nostr events similar?  
 I used to convert any dataset into a texture, run a shader on the GPU to turn that texture into pixels, and then read those pixels back on the CPU. 

In Nostr, I just need to create a texture with all the ~1000 events we receive every second, run them all through the graphics pipeline with a verify shader, and read the white (verified) or black (rejected) pixels at the end. 

This is how we did it before CUDA became a thing. It's not a good general purpose procedure, but when you have so many things to run in parallel, it could be worth it.  
 If I understood correctly, you essentially encode the work as a rendering problem, compute, and decode the result back. I wasn't aware of this technique. If you have a link, I'd love to read about it.
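That encode → compute → decode loop can be sketched on the CPU. This is just an illustrative mock, not real Nostr signature checking: `verify` is a made-up stand-in for the fragment shader's work, and the "texture" is a plain Python list with one texel per event. On a real GPU the `shade` loop is what runs in parallel across all texels.

```python
import hashlib

# Hypothetical stand-in for signature verification: an event "verifies"
# if the last byte of its SHA-256 hash is even. Real code would check
# the event's Schnorr signature instead.
def verify(event_bytes):
    return hashlib.sha256(event_bytes).digest()[-1] % 2 == 0

# Encode: lay the batch of events out as a 1D "texture", one texel each.
def encode(events):
    return list(events)

# "Shader": runs independently per texel. On the GPU every texel is
# processed in parallel; here we just loop. White = 255, black = 0.
def shade(texture):
    return [255 if verify(texel) else 0 for texel in texture]

# Decode: read the pixels back and keep the events whose pixel is white.
def decode(events, pixels):
    return [ev for ev, px in zip(events, pixels) if px == 255]

events = [f"event-{i}".encode() for i in range(1000)]
verified = decode(events, shade(encode(events)))
```

The point of the trick is that `shade` is embarrassingly parallel, so the per-event cost on the GPU is tiny; what you pay for is `encode` (uploading the texture) and `decode` (reading the pixels back).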

The encode/decode overhead is the main bottleneck, but it could work, especially considering how powerful today's mobile SoCs are. One thing to consider is how uniform the API is from one device to the next. 
 This is batshit insane 😯😯😯