
AI - The artificial intelligence thread



  On 8/26/2024 at 5:17 PM, Nebraska said:

 


no doubt they're on their way to take someone's job. 😉

future 10yr old kid to his classmate [sick burn] "sorry the sex robot took your mom's job, billy" 


Releases

Sample Libraries

instagram

Cascade Data 

Mastodon


 


It’s interesting how the development of military AI progresses. This is the most current one I found. I wonder if there’s something even more sophisticated.


There’s this pilot on Reddit posting videos of “UFOs” he recorded. I wonder if these are some of the newer drones. If you don’t have a pilot on board anymore, acceleration is a completely different topic, I guess:

 

  On 8/27/2024 at 8:58 AM, o00o said:


It’s interesting how the development of military AI progresses. This is the most current one I found. I wonder if there’s something even more sophisticated.


Goddamn reruns 

Positive Metal Attitude

Have you guys heard about this chat AI vs. the number of r’s in the word “strawberry”? Basically the AI firmly believed that there were only two r’s in the word. It took the guy like five minutes to convince the AI that there are indeed three r’s.

Lots of videos on TikTok about it but I’m having trouble finding the original.

  On 8/29/2024 at 9:34 PM, YEK said:

Have you guys heard about this chat AI vs. the number of r’s in the word “strawberry”? Basically the AI firmly believed that there were only two r’s in the word. It took the guy like five minutes to convince the AI that there are indeed three r’s.

Lots of videos on TikTok about it but I’m having trouble finding the original.


The GPT subreddit is spammed with the topic to the point of annoyance. It’s basically that LLMs don’t see words as words but as tokens, so they have a hard time counting letters:

    Quote

Large Language Models (LLMs) exhibit an interesting paradox when it comes to their capabilities. While they can perform complex tasks like generating code, they often struggle with seemingly simple operations such as counting the letters in words. This phenomenon can be attributed to a few key factors:

1. Token-level training: LLMs are typically trained at the token level rather than the character level. This means they process and understand language in chunks (tokens) rather than individual characters. As a result, they may not have a granular understanding of character composition within words. [^1]

2. Lack of explicit counting mechanism: LLMs don't have an inherent counting mechanism. They rely on patterns and associations learned during training to generate responses. Counting, which is a precise mathematical operation, doesn't align well with this probabilistic approach.

3. Abstraction vs. execution: LLMs are designed to understand and generate high-level concepts and patterns. They can describe how to count letters or even write code to do so, but they don't have the ability to execute these operations internally. This creates a disconnect between their ability to conceptualize a task and actually perform it. [^2]

4. Focus on semantic understanding: LLMs are primarily trained to understand and generate meaningful content based on context and semantics. Counting letters is a low-level task that doesn't necessarily contribute to this primary objective.

5. Lack of working memory: Unlike humans who can mentally keep track of counts, LLMs don't have a persistent working memory to store and manipulate such information during processing.

This limitation in counting letters highlights the difference between human cognition and the way LLMs process information. It's a reminder that while LLMs are powerful tools for language understanding and generation, they still have significant limitations when it comes to certain types of precise, quantitative tasks. [^2][^1]

To address this issue, researchers are exploring ways to enhance LLMs' abilities in tasks requiring precise manipulation of characters and numbers. This may involve developing hybrid models that combine the strengths of LLMs with more traditional computational approaches for specific tasks like counting.

[^1]: [Large Language Models Lack Understanding of Character ... - arXiv](https://arxiv.org/html/2405.11357v1#:~:text=However%2C%20large,within%20words.)
[^2]: [The Curious Case of LLMs: LLMs Can Code but Not Count - Medium](https://medium.com/@gcentulani/the-curious-case-of-llms-llms-can-code-but-not-count-14513d9532e1#:~:text=LLMs%20exhibit,tasks%20themselves.)

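The tokenization point above can be sketched in a few lines of Python. The subword split and token IDs here are made up for illustration, not any real model’s vocabulary:

```python
# Character-level counting is trivial for ordinary code:
word = "strawberry"
assert word.count("r") == 3

# An LLM, though, never sees the characters. A BPE-style tokenizer
# first splits the text into subword chunks (this segmentation is
# illustrative; real vocabularies differ by model):
tokens = ["straw", "berry"]

# The model then operates on integer IDs for those chunks, so the
# letters inside a token are never directly observable to it:
vocab = {"straw": 30157, "berry": 19772}  # hypothetical IDs
ids = [vocab[t] for t in tokens]
print(ids)  # the model's actual input: two opaque integers
```

From the model’s side the question “how many r’s?” is asked about `[30157, 19772]`, which is why the answer comes from learned associations rather than from actually inspecting characters.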

 

  On 8/30/2024 at 2:35 AM, o00o said:

To address this issue, researchers are exploring ways to enhance LLMs' abilities in tasks requiring precise manipulation of characters and numbers. This may involve developing hybrid models that combine the strengths of LLMs with more traditional computational approaches for specific tasks like counting.


hm. maybe instead of trying to train the LLMs (large LANGUAGE models) to be counting computers the researchers should just build them little helper computers, separately. that's why we designed computers in the first place, really.

then the LLMs with their little helper computers could learn to code directly, and create their own little LLMs within those computers.

(i'm trying to point out how insane it is that our COMPUTERS have unlearned how to do COMPUTING)
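fwiw the “little helper computer” idea already exists as tool calling: the model doesn’t count internally, it emits a request and the host runs exact code. a minimal sketch (all names here are hypothetical, not any vendor’s actual API):

```python
def count_letter(word: str, letter: str) -> int:
    """Deterministic helper: exact character count, no guessing."""
    return word.count(letter)

# Registry of tools the model is allowed to request.
TOOLS = {"count_letter": count_letter}

def handle_tool_call(name: str, args: dict):
    """Host-side dispatcher: run the requested tool and hand the
    precise result back for the model to phrase in prose."""
    return TOOLS[name](**args)

# e.g. the model, asked about "strawberry", requests:
result = handle_tool_call("count_letter", {"word": "strawberry", "letter": "r"})
print(result)  # 3
```

so the LLM stays a language model and the counting happens on an actual computer, which is exactly the division of labor being described.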

  On 8/30/2024 at 3:02 AM, Rubin Farr said:

DOOM running on a neural network with no game engine 

https://gamengen.github.io


A neural network of my brain would produce this instead

 

 

  On 9/1/2024 at 2:04 PM, Rubin Farr said:

 


Is anyone actually asking for this kind of shit or will these dumbass techbro companies keep sucking up venture capital to generate useless crap?

glowing in beige on the national stage

  On 9/1/2024 at 3:01 PM, EdamAnchorman said:

will these dumbass techbro companies keep sucking up venture capital to generate useless crap?


yes



 

  On 9/1/2024 at 3:01 PM, EdamAnchorman said:

A neural network of my brain would produce this instead

 

 

Is anyone actually asking for this kind of shit or will these dumbass techbro companies keep sucking up venture capital to generate useless crap?


Have we implemented the 3 laws of robotics yet? Because these fuckers look strong.


  On 9/1/2024 at 3:44 PM, Rubin Farr said:

Have we implemented the 3 laws of robotics yet? Because these fuckers look strong.


let's hope not. anyone stupid enough to buy one deserves to have their tiny brains smashed on their fancy couch. a noble sacrifice for the greater good, for the rest of us to learn from.

have never seen this youtuber before. no idea what his channel is about, but this little behind-the-scenes bit of a tech bro billionaire chatting about AI letting everyone make a TikTok (or any social media platform) of their own, using prompts to steal code/music/users etc., is pretty fucking weird.

 



 

Good news for all the conspiracy theorists at WAMM: ChatGPT can save you!

🤣 :flower:

    Quote

Conspiracy theory beliefs are notoriously persistent. Influential hypotheses propose that they fulfill important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored. To evaluate this possibility, we leveraged developments in generative artificial intelligence and engaged 2190 conspiracy believers in personalized evidence-based dialogues with GPT-4 Turbo. The intervention reduced conspiracy belief by ~20%. The effect remained 2 months later, generalized across a wide range of conspiracy theories, and occurred even among participants with deeply entrenched beliefs. Although the dialogues focused on a single conspiracy, they nonetheless diminished belief in unrelated conspiracies and shifted conspiracy-related behavioral intentions. These findings suggest that many conspiracy theory believers can revise their views if presented with sufficiently compelling evidence.


https://www.science.org/doi/10.1126/science.adq1814
