A friend was telephone-scammed and was told by the #Apple store that they were most likely hacked because they used #Chrome. They strongly encouraged them to switch to #Safari.
This seems very suspect to me. I appreciate many here don't like Chrome, but is Safari magically more secure?
I'll be speaking at #FossBack today in Berlin at 14:30
The topic is "Does Size Matter?" for #OpenSource projects and #UX
Larger projects tend to handle UX tasks a bit better, but is it impossible for smaller projects to do great UX? (spoiler: yes)
Volkswagen to bring back physical controls to their cars
This is great news. As someone who has done a ton of #UX work on digital devices, I'm not totally anti-screen. However, there are a few critical controls, e.g. headlights or the defroster, that should be completely unambiguous, fast, and certainly NOT hidden like Tesla did last year with its v11 update.
https://insideevs.com/news/701296/vw-physical-controls-to-return/
Every time I have to use #GIMP it's an exercise in frustration as it doesn't use many of the established image editing paradigms that have been standardized for decades. Today's lesson: Selecting a portion of a bitmap layer with the rectangle tool and then dragging the selection is.... odd.
n.b.: I know this is *possible*, I'm just saying it's not *discoverable* by anyone who has used bitmap editing tools for their entire 30-year career
@cf8c0364 @90431d67 @c4691757
Thank you! As a UX designer who thinks about coding-syntax-as-design, this feels much cleaner: the core use case, basic nesting, now reads much more simply. The & character is there for more advanced cases.
@90431d67 @cf8c0364 @c4691757 Similar question. I had been reading that the & operator was there so that you could declare things like:
&:hover
&.foo
I'm *not* saying it should, just saying it *did* and now that seems to be taken away. Or am I misunderstanding this change?
@0ffd42b0 But that's kind of my entire point, if something as basic as simple nesting requires "reading the spec" to get it right, something is wrong.
CSS is overly aggressive with brevity hacks, which causes readability problems and bugs. It's optimizing for writers, not readers.
Too many times I've stared at CSS thinking I was looking at a REGEX-style expression; it was far too baroque.
Easy to fix in this case: just use the & all the time. It will likely prevent a few bugs in the process.
@0ffd42b0
Thanks! But what about
a {
b {}
}
That's the same as
a b {} right?
That seems counter to your example with c & a as the c is outside the scope but in my example b is inside the scope. Both don't use the &.
I'm saying only that not using the & is indeed shorter but leads to inconsistency, which makes the code harder to read, especially for novices
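To make the consistency point concrete, here's a minimal sketch (the selector names are placeholders, not from a real stylesheet):

```css
/* Implicit nesting: the inner rule quietly becomes a descendant selector */
a {
  b { color: red; }    /* equivalent to: a b { color: red; } */
}

/* Explicit nesting: the & spells out the same relationship */
a {
  & b { color: red; }  /* also equivalent to: a b { color: red; } */
}
```

Both mean the same thing; the second just never makes the reader guess.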
@e9777705 Yeah, you're not the only one that said this. My point was a bit bigger, more like:
How will companies change when we don't need them for wifi as much?
(Clearly there will always be dead spots, but for example, it may get to the point that I get a SIM card in my laptop and just not care about wifi 95% of the time.)
That feeling when your coffee shop #WiFi is 5% the speed of your phone's SIM card. (so why am I here?)
Seems pretty clear we're on the cusp of rarely needing Wifi, even at home. Of course, this is much more likely outside the US as both Europe and Asia have far more advanced mobile networks/pricing.
But this feels like a tectonic shift that will affect all sorts of social patterns (such as coffee shop camping)
@4af4148c The BDO sign-up web page fires off every 'micro-transaction scare warning' for me. As a general rule I never play micro-transaction games. Am I overreacting in this case?
@b4acb571 The general work is now in the public domain. There is nothing stopping any other OS from implementing something similar. The actual code for my particular prototype is not at all ready for public consumption. My hope is that this encourages new work to happen.
So apparently #PWA apps are fully supported in #Safari on Mac #Sonoma. This is great news!
But can someone explain why Safari can't do this on #iOS ?
#web
@90431d67 I must thank you yet again. But alas, CSS proves yet again that it eschews clear syntax for compactness.
The UX designer in me would say that div.foo is clear and so is div > .foo
However, allowing a space is needlessly compact, creates ambiguity, and hurts readability. (he says VERY naively)
@90431d67 Thank you, that's very helpful. But this makes me realize my issue isn't with & but the fact that div .foo {...} is different from div.foo {...}
That's even MORE obscure!
BTW, I want to stress that I'm NOT a professional CSS author. Just someone that is trying to not drown when reading it.
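In case it helps anyone else tripping over this, the entire difference is the space (a sketch; the class name is arbitrary):

```css
div.foo { }   /* a <div> that ALSO has class "foo" (compound selector) */
div .foo { }  /* any element with class "foo" INSIDE a <div> (descendant) */
```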
#hottake CSS is too hard to read
As a #UX designer, I want code to have a clear, clean syntax that is easy to understand. I was excited that Nesting would improve how to write/read CSS but I'm horribly confused, e.g. when to use &. Here is the 'help text':
"Nesting classes without `&` will always result in descendant selectors. Use the `&` symbol to change that result"
This means nothing to me. I understand it's a complex problem but does the syntax need to be so baroque?
@716d58e8 The latest release (4.2) has added a few things that help, e.g. you can export lists and such, so porting is a bit more bulletproof.
(I'm not saying it's easy! Just a bit better)
@716d58e8 Oh, and yeah, you CAN'T migrate content, which drives me up the wall. This is especially frustrating as 4.2 just turned on the 'allow public search to find my stuff' option, so I can actually FIND my older posts now!
@7fe7dd72 I've just confirmed it's working! I searched "from:me <some word>" and it found a post from me 4 months ago with that word in it. So happy it's for all my posts!
@20a8cbf3 The majority of the code is in my prototype app written in a language called Processing. There were a few experiments moving it into Android but I don't believe it made it to AOSP
@b8c911e8 Something a lot more direct, e.g. the number isn't critical and neither is the project. A good title sets a hook: What is this?
Here are two just for starters:
1. Open source Survivor Lessons
2. Hard won strategies from open source
@1f8d831d It's a big world out there, with a wide variety of people (much more than 'clickers' and 'keyboarders')
My point was to rethink the foundations of text editing: get the core targeting/selection goals down so EVERYONE can benefit. If you want to have something keyboard-focused, that's great! It can be layered on top. I was just starting with the basics and showing one path where users could become more proficient.
@12f6653a @7906d04d I would think there are two types of fine-tuning: Domain tuning (giving it the right general skills/knowledge for my device) and Personal tuning (giving it the right skills/knowledge for *me*). Obviously interrelated.
My point being a phone can do so many things a chat interface can't so tuning it to dial the phone, open apps, and generally do 'phone things' seems a domain goal.
However, tuning it to my friends, word choices, and preferences is very much a personal one.
@7906d04d @12f6653a Thanks! The article discussed putting smaller versions of LLMs on a phone; that's a first start.
However getting one tuned, both to device tasks and to the user's life feels like the next big step. Having it running on a home server has even more potential
@18f2a82b @ca5977db I just want to back up Tyrone here. This is what I do: I picked a company that just turned it on for me (and set up HTTPS). I had to do my own domain name hacking though (configuring CNAME records), which isn't hard but it LOOKS hard.
I wouldn't recommend the company I chose, but I assume there are many, some of which I've been told even do the domain thing. WordPress just auto-updates, so it's really easy once it's up.
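For anyone wondering what the 'domain name hacking' actually looks like, it's roughly one DNS record like this (all names here are made up; your host tells you the real target):

```
blog.example.com.   3600   IN   CNAME   yoursite.hosting-provider.example.
```

Once that record propagates, the host serves your blog at your own domain.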
@5d116069 This is fascinating, thank you for sharing. Even if it's 10 seconds today, that clearly will fall over time.
If we wanted an Alexa-like product, I like the idea of a local base model to handle all speech-to-text requests. Out of the box it would likely be quite bad, but if we could train it with a long series of home-focused queries (in the form of docs?), that would be interesting. I wonder how hard it is to make it (or more likely, improve it to) handle a new topic?
@63063633 @27e3ee9e
This is very exciting. I've been looking to volunteer my time to an OSS project and something in this area sounds very interesting as technology like this always has UX issues.
@63063633 @27e3ee9e I'm familiar with TinyML; it can run on RPis, but those models are significantly smaller than the industrial ones running in the cloud.
I realize there may be no clear ways of measuring this yet but even using model size as a proxy, I've heard that ChatGPT 4 is in the terabytes.
Of course, it's not even clear we WANT something as comprehensive as that for many local tasks, but I would expect even a reasonable language model is likely to be pretty large.
@63063633 @9589bde7 @27e3ee9e
This is amazing thank you! I expected OSS versions but not so quickly. Do we have the terminology to discuss the differences in these models? For example, I can imagine a small LLM running locally but something like Bard or ChatGPT 4 would likely be significantly larger and more CPU intensive.
As I've already got "just use a grammar checker" replies, I should add that I don't ONLY want to do grammar checking, I'd like to do light rewrites as well. The problem is that any LLM I've tried goes much too far.
Here is an example, when I asked an LLM to 'clean up' my previous post, this is what I got back:
"I'm aware that LLMs can be powerful tools for editing and revising text. However, I'm also aware that they can sometimes be too eager to make changes. I'm looking for an LLM that can be used to make light rewrites without altering the overall meaning or tone of the text."
It's fine, but WAY too much
Can anyone share any 'text cleaning prompts' they use with #ChatGPT (or any other #LLM)?
Often when I paste in a paragraph (for an email, or a paper I'm writing) it massively rewrites it. I'd like a lighter hand, something that is more like a spelling/grammar checker.
What I'd REALLY like some type of 1-10 scale where 1 is just spelling, 2 is spelling+grammar all the way up to 10 which is 'do whatever you want'
Is that level of control possible?
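To be clear about what I'm imagining, here's a rough sketch of such a prompt (my own guess at wording, not something I've verified works well):

```
You are a copy editor. Edit the text below at level 2 on this scale:
  1 = fix spelling only
  2 = fix spelling and grammar
  3 = also smooth awkward phrasing
  ...
  10 = rewrite freely
Do not change the meaning, tone, or length. Return only the edited text.
```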
@a60eb0ca The more personal data gets sucked into LLMs, the more the need for privacy (and likely local use) will grow. We're already seeing LLMs that will process my online documents and answer questions about my work. That level of personalization is only going to grow. But as that data is fairly slow-changing, my 'mostly local' LLM could easily add that type of personal information to its corpus yet still stay both local and entirely in my control.
@a0f4d5b3 As I said above, I agree. I'm just saying that many of my queries are totally trivial (e.g. setting a timer, simple math, basic knowledge trivia) and there is a huge UX benefit to these working entirely locally.
There is also a privacy issue, but that goes even further. My point isn't that local is BETTER than networked, but that it's possible AT ALL. That strikes me as, at the very least, interesting (and likely disruptive)
@4af4148c It's exactly the same issue as global warming: we can just curl up and wait for death, or roll up our sleeves and actually make the future we want happen.
I'm not saying this is a given, or even easy. I'm just saying the first step is knowing WHAT I want and making sure we agree. There is strength in numbers.
@a0f4d5b3 Oh, I completely agree, things like weather, traffic, and news will ALWAYS need access to the cloud for up-to-date information. But there is a very large subset of tasks that can easily be handled locally. And as people become more concerned about privacy, the ability to run the majority of my queries locally will be much more important, with specific 'real time' queries going out to cloud services.
BTW, I'm making a technical point, there is a social/economic dimension as well
@4af4148c Two points: first, a 'greedy tech company' could still try to disrupt one of the incumbents by trying to go mostly local.
Second, who's to say that in 10 years ONLY big tech companies will be running LLMs? I realize training an LLM is a HUGE effort, today. But they will likely get easier to create over time.
I hope it's clear I'm not disagreeing with you; your instinct is correct. I'm just saying we can WANT a different result and work hard to make it happen.
It seems obvious to me that silicon will continue to improve to be faster with much more storage. But what does this mean?
Well, for starters, whatever #ChatGPT evolves into will likely be able to run LOCALLY on the equivalent of my 'phone' in <10 years (or my "home server")
In fact, many things we assume to be 'cloud only' will be easily "local only" with this shift. This doesn't mean "cloud" is dead but it will be a huge shift. #UX #LLM #TECH
As a few replies have asked: I don't mean "never cloud", only that it's possible to run "mostly local". This seems transformative (and already in the direction that Siri and Google are headed)
Even just as a simple fallback, this would be very helpful. I'm just saying that as privacy concerns grow (and alternatives appear), the ability to be *mostly* local strikes me as a rather disruptive position.
Products like @cbff34d4 are well positioned to take advantage of this.
Notes by Scott Jenson | export