Z4rK

joined 2 years ago
[–] Z4rK@lemmy.world 0 points 2 years ago (1 children)

That’s fair, but you’re misunderstanding the technology if you’re bashing Apple’s AI for making macOS less secure. Most likely it will be just as secure as, for example, their password functionality, although we don’t have details yet. You either trust the OS or you don’t.

Microsoft Recall was designed so badly that there’s no hope for it.

[–] Z4rK@lemmy.world 1 points 2 years ago (3 children)

macOS and Windows could already be doing this today behind your back regardless of any new AI technology. Don’t use an OS you don’t trust.

[–] Z4rK@lemmy.world -1 points 2 years ago

That’s why it’s at the OS level. For text, for example, it seems to work in any text app that uses the standard text input API, which Apple controls.

The user activates the “AI overlay” at the OS level, not in the app; the OS reads the selected text from the app and sends text suggestions back.

The app is (possibly) unaware that AI has been used at all, and has not received any user information.

Of course, if you don’t trust the OS, don’t use this. And I’m 100% speculating here based on what we saw for the macOS demo.
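To make the speculation concrete, here’s a minimal sketch of that flow. Everything here is invented for illustration (the class, the model, the method names); the point is only that the OS, not the app, mediates between the text field and the model.

```python
# Hypothetical sketch of the speculated overlay flow. All names are
# invented; the real text input API is Apple's, not this.

class TextApp:
    """Stands in for any app that uses the standard text input API."""
    def __init__(self, text: str):
        self.selected_text = text  # exposed to the OS via the text API

    def replace_selection(self, new_text: str) -> None:
        # From the app's point of view this is just an ordinary edit.
        self.selected_text = new_text

def on_device_model(prompt: str) -> str:
    """Placeholder for a local model; here it just uppercases the text."""
    return prompt.upper()

def os_ai_overlay(app: TextApp) -> None:
    # 1. User invokes the overlay at the OS level, not inside the app.
    selection = app.selected_text            # OS reads via the text API
    # 2. Processing happens in the OS, on device; the app isn't involved.
    suggestion = on_device_model(selection)
    # 3. The result comes back to the app as a normal text edit.
    app.replace_selection(suggestion)

app = TextApp("make this louder")
os_ai_overlay(app)
print(app.selected_text)  # MAKE THIS LOUDER
```

Notice the app never calls the model and never learns that AI was involved, which is the whole privacy argument above.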

[–] Z4rK@lemmy.world 0 points 2 years ago (7 children)

How so? Many people want to use AI in privacy, but it’s too hard for most people to set it up for themselves currently.

Having AI tools at the OS level, so you can use them in almost any app with processing guaranteed to happen on device, will be very useful if done right.

[–] Z4rK@lemmy.world -1 points 2 years ago

I mean, that’s fair; if you don’t believe in his integrity, then this news has very little value to you.

-2
submitted 2 years ago* (last edited 2 years ago) by Z4rK@lemmy.world to c/technology@lemmy.world

Actually, really liked the Apple Intelligence announcement. It must be a very exciting time at Apple as they layer AI on top of the entire OS. A few of the major themes.

Step 1 Multimodal I/O. Enable text/audio/image/video capability, both read and write. These are the native human APIs, so to speak.

Step 2 Agentic. Allow all parts of the OS and apps to inter-operate via "function calling"; kernel process LLM that can schedule and coordinate work across them given user queries.

Step 3 Frictionless. Fully integrate these features in a highly frictionless, fast, "always on", and contextual way. No going around copy pasting information, prompt engineering, or etc. Adapt the UI accordingly.

Step 4 Initiative. Don't perform a task given a prompt, anticipate the prompt, suggest, initiate.

Step 5 Delegation hierarchy. Move as much intelligence as you can on device (Apple Silicon very helpful and well-suited), but allow optional dispatch of work to cloud.

Step 6 Modularity. Allow the OS to access and support an entire and growing ecosystem of LLMs (e.g. ChatGPT announcement).

Step 7 Privacy. <3

We're quickly heading into a world where you can open up your phone and just say stuff. It talks back and it knows you. And it just works. Super exciting and as a user, quite looking forward to it.

https://x.com/karpathy/status/1800242310116262150
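The “function calling” idea in step 2 can be sketched in a few lines. This is not Apple’s (or anyone’s) actual API — the registry, function names, and plan format below are all made up — it just shows an OS-level coordinator executing a plan an LLM might emit across app-exposed functions.

```python
# Hypothetical sketch of "function calling" orchestration (step 2 in the
# tweet). The registry, names, and plan format are invented.

from typing import Callable, Dict, List, Tuple

registry: Dict[str, Callable[..., str]] = {}

def register(name: str):
    """Apps expose capabilities to the OS-level coordinator by name."""
    def wrap(fn: Callable[..., str]) -> Callable[..., str]:
        registry[name] = fn
        return fn
    return wrap

@register("calendar.create_event")
def create_event(title: str, when: str) -> str:
    return f"event '{title}' at {when}"

@register("mail.draft")
def draft_mail(to: str, body: str) -> str:
    return f"draft to {to}: {body}"

def coordinate(plan: List[Tuple[str, dict]]) -> List[str]:
    """An LLM would emit a plan like this; the OS executes each call."""
    return [registry[name](**args) for name, args in plan]

# A plan the model might produce for "set up lunch with Sam and email him":
results = coordinate([
    ("calendar.create_event", {"title": "Lunch with Sam", "when": "Tue 12:00"}),
    ("mail.draft", {"to": "sam@example.com", "body": "Lunch Tue 12:00?"}),
])
print(results[0])  # event 'Lunch with Sam' at Tue 12:00
```

The interesting design question is exactly who holds the registry: if the OS does, apps only ever see individual calls, never the user’s query — which ties back to the privacy point in the comments above.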


Hi everyone! We're incredibly excited to announce that we're launching a beta of Finamp's redesign today. This is a major update to the app, and we're looking for feedback from anyone willing to try it out before we roll it out to everyone.

The beta is a work in progress: there are several new features already, and we will be adding more over time.

Looks very nice!

[–] Z4rK@lemmy.world 1 points 2 years ago (2 children)

It’s really cool, but the example doesn’t produce any sensible output? If you’ve created something like this, why wouldn’t you have your demo output something sensible, like Fibonacci or 1337 or whatever?