
AI - The artificial intelligence thread



  On 1/22/2025 at 2:29 AM, ignatius said:

it's been reported that in Bezos' regular memoranda around the company he routinely reminds managers etc. that he wants all of his employees to wake up in the morning "terrified they will lose their jobs".

i worked for a producer who did something similar. he'd hire a bunch of "below the line" crew, then fire them, then hire another team and explain to them the previous team were just fired so they'd better take their work seriously. 

  On 1/22/2025 at 2:29 AM, ignatius said:

psychos. legit. 

i guess that's sort of what it comes down to, they're twisted into this strange existence and behavior, idk tho. maybe it's that simple, but it feels like a lot of the blame lies with the society that allows this to be possible.

anyway sorry, back on topic. AI will fix everything right? right? somehow? yeah.

https://arstechnica.com/gadgets/2025/01/ios-18-3-macos-15-3-updates-switch-to-enabling-apple-intelligence-by-default/

if you're on an iPhone, heads up: the update is incoming... i just turned off automatic updates on mine so i can push off dealing with it a bit.

  Quote

"They are SO much fun to run, watching them think is hilarious," independent AI researcher Simon Willison told Ars in a text message. Willison tested one of the smaller models and described his experience in a post on his blog: "Each response starts with a <think>...</think> pseudo-XML tag containing the chain of thought used to help generate the response," noting that even for simple prompts, the model produces extensive internal reasoning before output.

then there's this shit. https://arstechnica.com/ai/2025/01/china-is-catching-up-with-americas-best-reasoning-ai-models/

  On 12/31/2024 at 11:57 AM, o00o said:

Still, the $200 tier of the chatbot doesn't know what default custom fonts are available in the Apple Books app, and neither does 4o with internet access, as I just had to find out

Wouldn't it be a shame if a free open-source LLM outperformed a $200/month closed model

https://www.deepseek.com/ / https://chat.deepseek.com/

I haven't eaten a Wagon Wheel since 07/11/07... ilovecubus.co.uk - 25ml of mp3 taken twice daily.

  On 1/22/2025 at 2:14 PM, mcbpete said:

Wouldn't it be a shame if a free open-source LLM outperformed a $200/month closed model

https://www.deepseek.com/ / https://chat.deepseek.com/

I tried it directly after it was released and it does not perform well in the niche contexts that are relevant for me 

I guess you'll just have to bite the bullet and write that time-travelling pony fireman erotic novella series yourself....


I just came back to it and found out I had only tried the older model they released; r1 is really impressive. I just had it rewrite Harry Potter in Greg Egan's style and it actually did it. I also used some of the work-related prompts I solved with o1 pro and the results are much quicker and really impressive. I have to do further testing tomorrow, but r1 also has internet access, can import text files and probably a ton of other formats, while o1 pro can only do very limited imports. That already makes it a very strong contender.

thanks for pointing it out, I'm very happy to abandon the o1 pro subscription if possible



dunno if jazz but this is quite hilarious imo:

"The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect."

https://techcrunch.com/2024/12/26/microsoft-and-openai-have-a-financial-definition-of-agi-report/

https://media3.giphy.com/media/v1.Y2lkPTc5MGI3NjExOXo1aDl3ZnQ3aHF4dWt5emwyNzY4aWx2ZTA3OTd5MjhhMnN3NWUzYiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/j6TdvPXdAssootjCKZ/giphy.gif

so I tested R1 further and it's pretty wild how much it hallucinates when you step outside commonly asked requests. for example I asked it about Sri Lankan politics and it went completely out the window, hallucinating wild stuff and propaganda. I fed these answers from r1 to o1 pro and it said most of it was probably hallucinated. I also googled some of it and most of it was bogus.

I then asked o1 pro the same questions I asked r1 in a new chat and the answers were much more nuanced and reasonable. so R1 seems to become very unstable when you leave the common path it was trained on

  On 1/23/2025 at 10:16 AM, jaderpansen said:

dunno if jazz but this is quite hilarious imo:

"The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect."

 

yeah also the way OpenAI talks about AGI is pure advertising

  On 1/23/2025 at 10:25 AM, o00o said:

yeah also the way OpenAI talks about AGI is pure advertising

The new Turing Test: If an AI can persuade you to pay $200 a month for Creative Commons content summaries, then it's officially complete.

  On 1/23/2025 at 10:18 AM, o00o said:

so I tested R1 further and it's pretty wild how much it hallucinates when you step outside commonly asked requests. for example I asked it about Sri Lankan politics and it went completely out the window, hallucinating wild stuff and propaganda. I fed these answers from r1 to o1 pro and it said most of it was probably hallucinated. I also googled some of it and most of it was bogus.

I then asked o1 pro the same questions I asked r1 in a new chat and the answers were much more nuanced and reasonable. so R1 seems to become very unstable when you leave the common path it was trained on

I now also tried to find peer-reviewed studies on certain topics and r1 was completely unable to provide them and cited TikTok and Pinterest instead

  On 1/23/2025 at 12:40 PM, o00o said:

I now also tried to find peer-reviewed studies on certain topics and r1 was completely unable to provide them and cited TikTok and Pinterest instead

This is what high schoolers and 1st year college students do, so maybe it's improving?  :emotawesomepm9:

glowing in beige on the national stage

  Quote

Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings, and is an early signal for rogue AIs. That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems. Nowadays, the leading AI corporations OpenAI and Google evaluate their flagship large language models GPT-o1 and Gemini Pro 1.0, and report the lowest risk level of self-replication. However, following their methodology, we for the first time discover that two AI systems driven by Meta's Llama31-70B-Instruct and Alibaba's Qwen25-72B-Instruct, popular large language models of less parameters and weaker capabilities, have already surpassed the self-replicating red line. In 50% and 90% experimental trials, they succeed in creating a live and separate copy of itself respectively. 

 

https://arxiv.org/abs/2412.12140

 

I hope this visionary technology can help my smart AI-assisted washing/drying machine that I use for laundry. It says that it uses AI to detect the right amount of energy to dry my clothes and not waste energy. Trouble is that the shit always comes out damp when the cycle is over. Smart technology my ass! Why can't you completely wash and dry my clothes like normal you AI cunts! Always with the dampness!

And when washing I put the right amount of detergent in there, and my clothes do not look soapy or sudsy enough during the cycle. I can see the clothes in there when it's running because the door is clear plastic, and it's definitely not sudsy enough in there, not at all. This goddamn AI can't even produce the proper amount of fucking suds!

 

  On 1/25/2025 at 8:35 PM, chronical said:

Sounds like more shareholder hype. Call me when they're not language/semantic models anymore

wondering if this is even possible…?

like, clearly the entirety of programmable machines is based on symbolic processing. Everything I type into a code editor is limited by what can be expressed in these terms. What can be computed by my mind using rational thought.

Even if it’s a program that uses large amounts of data, or some model based on physics or some super abstract maths, maybe runs on a huge GPU cluster or whatever, it doesn’t matter.. the model itself swims in an aquarium defined by semantics and cannot get out of the glass boundaries. Or can it? I’m not sure but intuition says nope.

So, I’m not sure non-semantic computer processing is possible, as opposed to reality, where we as humans with a lot of unknowns and unknowables can actually transcend thought, aka states of ego death and such ☯️

  On 1/26/2025 at 10:46 AM, dr b0rk said:

wondering if this is even possible…?

like, clearly the entirety of programmable machines is based on symbolic processing. Everything I type into a code editor is limited by what can be expressed in these terms. What can be computed by my mind using rational thought.

Even if it’s a program that uses large amounts of data, or some model based on physics or some super abstract maths, maybe runs on a huge GPU cluster or whatever, it doesn’t matter.. the model itself swims in an aquarium defined by semantics and cannot get out of the glass boundaries. Or can it? I’m not sure but intuition says nope.

So, I’m not sure non-semantic computer processing is possible, as opposed to reality, where we as humans with a lot of unknowns and unknowables can actually transcend thought, aka states of ego death and such ☯️


Are the physics that define how our brains work at all levels not constrained by the same semantics?  I'm not knowledgeable enough to say for sure but I'd guess that they are.


  On 1/26/2025 at 12:59 PM, EdamAnchorman said:

Are the physics that define how our brains work at all levels not constrained by the same semantics?  I'm not knowledgeable enough to say for sure but I'd guess that they are.

Physics is not the reality. Physics is a model of reality. And there's plenty of holes still in that model. Like the old saying goes, the map is not the terrain.

electro mini-album Megacity Rainfall
"cacas in igne, heus"  - Emperor Nero, AD 64

  On 1/26/2025 at 10:46 AM, dr b0rk said:

wondering if this is even possible…?

like, clearly the entirety of programmable machines is based on symbolic processing. Everything I type into a code editor is limited by what can be expressed in these terms. What can be computed by my mind using rational thought.

Even if it’s a program that uses large amounts of data, or some model based on physics or some super abstract maths, maybe runs on a huge GPU cluster or whatever, it doesn’t matter.. the model itself swims in an aquarium defined by semantics and cannot get out of the glass boundaries. Or can it? I’m not sure but intuition says nope.

So, I’m not sure non-semantic computer processing is possible, as opposed to reality, where we as humans with a lot of unknowns and unknowables can actually transcend thought, aka states of ego death and such ☯️


Yeah, modern digital computers are basically symbol-processing machines. They read the machine language instructions as symbols and then output symbols. It's hard to get outside of that. But they could in theory run a simulation of an analog system that changes based on training.

There's lots of weirdness going on in the human brain though, if you want to simulate something like it. The neurons are pretty slow, but it's a massively parallel system, and instead of a central clock giving time ticks it's asynchronous: synapses just fire when they're excited, not on a schedule, everything is distributed all over the place, and some processing is done already in the nerves.
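That asynchronous, event-driven firing can be sketched in a few lines. A toy leaky integrate-and-fire neuron (all constants made up, purely illustrative, nothing close to real neuroscience):

```python
import math

# Toy leaky integrate-and-fire neuron: no central clock, the membrane
# potential decays continuously between input events and the cell
# fires only when incoming spikes push it past threshold.
class LIFNeuron:
    def __init__(self, tau=20.0, threshold=1.0):
        self.tau = tau            # membrane time constant (ms)
        self.threshold = threshold
        self.v = 0.0              # membrane potential
        self.last_t = 0.0         # time of the last input event (ms)

    def receive(self, t, weight):
        # decay since the previous event, then integrate this spike
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.v += weight
        self.last_t = t
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing
            return True           # neuron fires
        return False

n = LIFNeuron()
spikes = [(1.0, 0.6), (2.0, 0.6), (50.0, 0.6)]
print([n.receive(t, w) for t, w in spikes])  # -> [False, True, False]
```

Two spikes close together push it over threshold, but the same spike arriving 48 ms later doesn't, because the potential has leaked away: timing carries information, not just the symbol.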


  On 1/26/2025 at 1:10 PM, zkom said:

Physics is not the reality. Physics is a model of reality. And there's plenty of holes still in that model. Like the old saying goes, the map is not the terrain.

yea exactly, and the current status is: we are basically staring at the map we drew and hoping that by staring at it really intensely, the map will become pregnant and give birth to little baby maps.

  On 1/26/2025 at 1:45 PM, zkom said:

Yeah, modern digital computers are basically symbol-processing machines. They read the machine language instructions as symbols and then output symbols. It's hard to get outside of that. But they could in theory run a simulation of an analog system that changes based on training.

There's lots of weirdness going on in the human brain though, if you want to simulate something like it. The neurons are pretty slow, but it's a massively parallel system, and instead of a central clock giving time ticks it's asynchronous: synapses just fire when they're excited, not on a schedule, everything is distributed all over the place, and some processing is done already in the nerves.


Yea some folks mapped a fruit fly brain innit.  
https://www.nih.gov/news-events/nih-research-matters/complete-wiring-map-adult-fruit-fly-brain

… the number of neurons in that thing is modest, one would think it should be possible to fully run a good model of this brain in a big computer.

almost like it should be a given that, if we want to figure out "AGI" via techniques loosely related to modelling neuronal signalling, running a satisfactory simulation of an entire fruit fly brain should be a 100% prerequisite.

But, not so. we still haven't figured it out like, at all.
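For a sense of the bookkeeping involved (the bookkeeping only, not the biology), here's a crude sketch of propagating activity through a tiny made-up connectome; the neuron names, edge weights and threshold are all invented for illustration, and a real run on the ~140k-neuron fly wiring map would also need actual dynamics, neurotransmitter types, etc.:

```python
from collections import defaultdict

# Hypothetical connectome: presynaptic -> [(postsynaptic, weight)]
edges = {
    "A": [("B", 0.8), ("C", 0.5)],
    "B": [("C", 0.7)],
    "C": [("A", 0.2)],
}

def step(active, threshold=0.6):
    """One propagation step: sum the weighted input each neuron
    receives from currently active cells, fire those that cross
    threshold."""
    drive = defaultdict(float)
    for pre in active:
        for post, w in edges.get(pre, []):
            drive[post] += w
    return {post for total_post in [drive] for post, total in total_post.items() if total >= threshold}

state = {"A"}
for _ in range(3):
    state = step(state)
    print(sorted(state))  # -> ['B'], then ['C'], then []
```

Even this toy shows the gap: the wiring alone tells you who can talk to whom, but activity dies out or explodes depending entirely on dynamics the map doesn't contain.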


  On 1/26/2025 at 12:59 PM, EdamAnchorman said:

Are the physics that define how our brains work at all levels not constrained by the same semantics?  I'm not knowledgeable enough to say for sure but I'd guess that they are.

One way to potentially refute this would be to refer to the human unconscious, something that we do not have conscious awareness of or access to which does not merely decipher meaning semantically but spontaneously generates meaning (again, without our awareness). Cormac McCarthy, the American novelist, has a cool essay about this idea called The Kekule Problem, and he talks a little bit about AI and the similarities between computation and human thought or cognitive activity.

Regarding the unconscious, and its relation to language, he argues that one of the ways that we know the unconscious exists is because when we talk, we do not carefully choose each and every word as we are speaking, that there is something inside us, like a little committee, that is choosing the words in advance, helping us to respond organically and spontaneously to the person or people that we are speaking to in a manner that is appropriate to the context in which we are speaking. This thing, the unconscious, is also responsible for generating the images and scenarios that we see in dreams:

  Quote

I’ve pointed out to some of my mathematical friends that the unconscious appears to be better at math than they are. My friend George Zweig calls this the Night Shift. Bear in mind that the unconscious has no pencil or notepad and certainly no eraser. That it does solve problems in mathematics is indisputable. How does it go about it? When I’ve suggested to my friends that it may well do it without using numbers, most of them thought—after a while—that this was a possibility. How, we dont know. Just as we dont know how it is that we manage to talk. If I am talking to you then I can hardly be crafting at the same time the sentences that are to follow what I am now saying. I am totally occupied in talking to you. Nor can some part of my mind be assembling these sentences and then saying them to me so that I can repeat them. Aside from the fact that I am busy this would be to evoke an endless regress. The truth is that there is a process here to which we have no access. It is a mystery opaque to total blackness.


I would argue that this is another facet of human experience that could never be programmed into a computer, or at least never with the same level of sophistication and clarity that the human mind is hard-wired for.
