
AI - The artificial intelligence thread



Of course, the LLM is not having thoughts and things. As a programmer I am very much aware of how the whole thing works, but still, calling it an autocomplete, as if it just takes some cues and runs with them, is not doing it real justice. For example, the actual model is based on agents that are specifically trained for certain tasks and communicate with each other to solve the problem, which is far more than a so-to-speak autocomplete does.
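For what it's worth, the "agents talking to each other" pattern can be sketched in a few lines. This is a toy illustration only: `call_llm`, `planner`, and `coder` are made-up names, and the LLM call is stubbed out so the example runs without any real API.

```python
# Toy sketch of a two-agent pipeline: a planner agent hands its output
# to a coder agent. call_llm is a stand-in for any chat-completion call,
# stubbed here so the example runs offline.

def call_llm(role_prompt: str, message: str) -> str:
    """Stub for a chat-completion call; a real system would hit an API here."""
    return f"[{role_prompt}] response to: {message}"

def planner(task: str) -> str:
    # First agent: break the task into a plan.
    return call_llm("You are a planner. Produce a step list.", task)

def coder(plan: str) -> str:
    # Second agent: implement the plan the planner produced.
    return call_llm("You are a coder. Implement the plan.", plan)

def pipeline(task: str) -> str:
    # Agents "communicate" by passing text to each other.
    return coder(planner(task))

print(pipeline("sort a list of users by signup date"))
```

The point is only that "agent" here means an LLM call wrapped in a role prompt, chained to other such calls.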

  Quote

 

A chatbot told a 17-year-old that murdering his parents was a "reasonable response" to them limiting his screen time, a lawsuit filed in a Texas court claims.

"You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse,'" the bot allegedly wrote. "I just have no hope for your parents," it continued, with a frowning-face emoji.

According to the suit, the 17-year-old engaged in self-harm after being encouraged to do so by the bot, which the suit says "convinced him that his family did not love him."

 


https://www.npr.org/2024/12/10/nx-s1-5222574/kids-character-ai-lawsuit

Apple Intelligence’s opinion on discontinuing its own iPhone Mini 

IMG_6934.jpeg

  On 12/12/2024 at 2:19 PM, o00o said:

Of course, the LLM is not having thoughts and things. As a programmer I am very much aware of how the whole thing works, but still, calling it an autocomplete, as if it just takes some cues and runs with them, is not doing it real justice. For example, the actual model is based on agents that are specifically trained for certain tasks and communicate with each other to solve the problem, which is far more than a so-to-speak autocomplete does.

My friends & family in network security say they are training their own replacements.

Positive Metal Attitude

  On 12/12/2024 at 2:19 PM, o00o said:

Of course, the LLM is not having thoughts and things. As a programmer I am very much aware of how the whole thing works, but still, calling it an autocomplete, as if it just takes some cues and runs with them, is not doing it real justice. For example, the actual model is based on agents that are specifically trained for certain tasks and communicate with each other to solve the problem, which is far more than a so-to-speak autocomplete does.

the actual model is not based on “agents”, the hype word of the year. The actual model is an LLM, which is fundamentally unchanged from when OpenAI dropped the first generation of ChatGPT, which impressed everyone, rightly so. But that thing is certainly a sophisticated autocomplete. It’s literally how it works.

since then, everybody is trying to come out on top, mainly driven by the sales departments. There are continual, measurable improvements to the LLMs, but these are fundamentally optimisations. We’re still using the same data-hungry, processing-intensive, “hallucinating” autocomplete with a barrel of Botox, one that tends to drift into nonsense because it is a regurgitation machine that doesn’t know what makes sense.

The Botox I’m referring to is the front, the face that you get: the “hello, I’m your helpful assistant”. The optimisations that have been applied are mostly top-down, because the thing is an unwieldy blob of weights that you can’t really poke into.

If they keep seeding the text generator with the right prior and keep slapping fine-tuning and filtering onto it, the hope is that it can look convincing enough to you, the customer, to hold you over until the next thing comes around, which is probably not LLMs.
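To make the "sophisticated autocomplete" point concrete, here is a toy sketch of greedy next-token generation. A tiny hard-coded bigram table stands in for the billions of learned weights, so this illustrates only the decoding loop, not any real model.

```python
# Minimal illustration of an LLM as autocomplete: repeatedly pick a likely
# next token given the tokens so far. A hard-coded bigram table replaces
# the learned probability distribution of a real model.

BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def next_token(prev: str) -> str:
    # Greedy decoding: take the highest-probability continuation.
    choices = BIGRAMS.get(prev, {})
    return max(choices, key=choices.get) if choices else "<eos>"

def generate(start: str, max_len: int = 5) -> list[str]:
    out = [start]
    while len(out) < max_len:
        tok = next_token(out[-1])
        if tok == "<eos>":
            break
        out.append(tok)
    return out

print(generate("the"))  # greedy chain: the -> cat -> sat -> down
```

The "seeding with the right prior" mentioned above corresponds to choosing the starting context; everything after that is the same pick-the-next-token loop.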

  On 12/12/2024 at 12:04 PM, o00o said:

The whole thing also makes me wonder if human language is also some form of autocomplete. I had a discussion with it this morning about what the inner voice actually is, and it seems to be an interaction of several networks inside the brain, which reminded me a lot of how an actual language model works as well. Sure, there are many more things interplaying, but at its very simple core it sounded pretty identical.

the inner voice (aka "brain chatter") comes from the left hemisphere of the brain. this is where our language center resides. this side of the brain thinks in patterned responses to incoming stimuli. it's where ingrained programs of pattern recognition occur and are used to predict what we will think or how we will act in a future situation. stroke patients who have the left hemisphere of the brain taken offline will lose their language ability almost entirely, until they go through a lengthy rehabilitation process to gain it back.

interestingly enough, this is also the part of the brain that defines the self, houses the ego, and creates the "story" of who you are as a person. these stroke patients will also lose the inner voice in the head and can no longer completely access the story the brain creates of their history, the one that narrates how they should respond to stimuli. (side note: we can go down some interesting philosophical rabbit holes from here as to how the self is nothing more than a fiction created by circuitry in the left hemisphere, since, neurologically speaking, there isn't an area found via brain mapping that pinpoints exactly where the self is located. "no self" is a central tenet of Buddhism.)

anyway, my point here is that language as it works in the human brain can be reverse engineered and explained. there is neurological circuitry that has been mapped, and neurons that are believed to be uniquely human. can this functionality we all have, physiologically developed and altered over hundreds of thousands of years of conditioning, be replicated in LLMs so it's equal to or better than human? and why do we need this exactly? I used to be a big AI proponent in my younger 20s. now I'm middle aged and cynical on all of it.

 

 

when I grow up, I want to be a LLM. if I say nothing, it will be a death sentence to me, unbearable not to have an answer, I will flub it and fib it and twist around like a worm and create a superficially believable image of reality, free from the difficult to accept truth that I am nothing. Thus, I will not truly connect with anything. But I will say the words, will say them, for as long as I do, I will exist, and I do it with a convincing smile. Gotta have an answer, the answer, priority number uno. Reality conflict? Doesn't check out? Laughable, I will pretend to smile and double down on it.

An OpenAI researcher-turned-whistleblower has been found dead in an apartment in San Francisco, authorities said. 

In October, the New York Times published an interview with Mr Balaji in which he alleged that OpenAI had violated US copyright law while developing its popular ChatGPT online chatbot.

The article said that after working at the company for four years as a researcher, Mr Balaji had come to the conclusion that "OpenAI's use of copyrighted data to build ChatGPT violated the law and that technologies like ChatGPT were damaging the internet".

https://www.bbc.com/news/articles/cd0el3r2nlko

 

 

8phfvg5sfm7e1.png?width=1024&auto=webp&s

Releases

Sample Libraries

instagram

Cascade Data

Mastodon

 

  On 12/19/2024 at 2:43 AM, Rubin Farr said:

3edf327f-de44-4174-9d45-c58ab3f556e8.png


 


 

  On 12/13/2024 at 5:14 PM, dr b0rk said:

the actual model is not based on “agents”, the hype word of the year. The actual model is an LLM, which is fundamentally unchanged from when OpenAI dropped the first generation of ChatGPT, which impressed everyone, rightly so. But that thing is certainly a sophisticated autocomplete. It’s literally how it works.

since then, everybody is trying to come out on top, mainly driven by the sales departments. There are continual, measurable improvements to the LLMs, but these are fundamentally optimisations. We’re still using the same data-hungry, processing-intensive, “hallucinating” autocomplete with a barrel of Botox, one that tends to drift into nonsense because it is a regurgitation machine that doesn’t know what makes sense.

The Botox I’m referring to is the front, the face that you get: the “hello, I’m your helpful assistant”. The optimisations that have been applied are mostly top-down, because the thing is an unwieldy blob of weights that you can’t really poke into.

If they keep seeding the text generator with the right prior and keep slapping fine-tuning and filtering onto it, the hope is that it can look convincing enough to you, the customer, to hold you over until the next thing comes around, which is probably not LLMs.


I’m referring to o1 Pro, which uses complex reasoning. I’ve been using it extensively, and while it’s not exactly what the marketing claims it to be, it’s still very impressive. Its main strength lies in its ability to create highly complex texts and retain information without forgetting anything. This makes it particularly useful for programming, as you can feed it pages upon pages of information over several days, and it will still provide detailed answers and write intricate documentation.

I also enjoy using it to write audiobooks and discuss complex topics. I plan to subscribe to the expensive version again, though not immediately; it’s definitely worth it. I’m curious to see what they come up with by Christmas.

I know this might be a controversial opinion, but I use it extensively.

 

Edited by o00o
  On 12/19/2024 at 6:43 PM, o00o said:

I’m referring to o1 Pro, which uses complex reasoning. I’ve been using it extensively, and while it’s not exactly what the marketing claims it to be, it’s still very impressive. Its main strength lies in its ability to create highly complex texts and retain information without forgetting anything. This makes it particularly useful for programming, as you can feed it pages upon pages of information over several days, and it will still provide detailed answers and write intricate documentation.

I also enjoy using it to write audiobooks and discuss complex topics. I plan to subscribe to the expensive version again, though not immediately; it’s definitely worth it. I’m curious to see what they come up with by Christmas.

I know this might be a controversial opinion, but I use it extensively.

 


image.thumb.jpeg.b2c23cb3271319958c09dfc33071898f.jpeg

i'm not usually 100% on board w/ cory doctorow, but he often has some interesting things to say in his talks. there's a pretty good Q&A at the end of this one, and it goes quickly; he generalizes some but gets into specifics in the Q&A portion. the host does a little bit of pushback here and there, but it's not a robust debate, more like an info session.

 

Edited by ignatius


 

As detailed in a new study published in the journal The BMJ, some of the tech industry's leading chatbots are showing clear signs of mild cognitive impairment. And, as with humans, the effects become more pronounced with age, with the older large language models performing the worst of the bunch.

"These findings challenge the assumption that artificial intelligence will soon replace human doctors, as the cognitive impairment evident in leading chatbots may affect their reliability in medical diagnostics and undermine patients' confidence," the researchers wrote.

https://futurism.com/the-byte/chatbots-cognitive-decline-dementia

sd0zfd3nbx9e1.jpeg?width=510&auto=webp&s

:cisfor:🤮


 

  On 12/30/2024 at 11:37 PM, o00o said:

‘Plan to’??? They did years ago 

i guess MOAR! is coming. fresh hell!


 

here's a piece explaining what they're up to and betting on.. fucking bots.. enshittification!

https://www.thewrap.com/facebook-ai-bots-new-social-media-friends/

 

Real human friends are soon to be out-of-date in Silicon Valley. Instead, your new social media friend will be an artificial intelligence bot, according to one executive from Meta, Facebook and Instagram’s parent company.

Connor Hayes, Meta’s vice president of product for generative AI, told the Financial Times last week that the company expects AI bots will “exist on our platforms, kind of in the same way that accounts do.” 

“They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform,” Hayes added. 

Meta, led by CEO Mark Zuckerberg, is increasingly turning to AI to spur user engagement. In July, the company introduced its AI Studio tool, allowing users to create AI characters. The AI characters will serve as “an extension” of its users and be able to respond to direct messages and story replies for other social media friends, Meta said when the tool was first introduced. 

The company also teased a new AI tool in October that allows users to make 16-second videos based on text prompts; the tool, dubbed Movie Gen, is expected to roll out in 2025.

Meta’s bet on AI bots could be seen as a new, creative way to keep users on its platforms and off of rival apps like TikTok and YouTube. (Those two platforms were recently found by Pew Research Center to be the most popular apps among American teenagers.) 

It is also a move that comes as Meta is running out of new people to join its social network. Meta reported 3.29 billion people used at least one of its apps — Facebook, Instagram, WhatsApp, and Messenger — on a daily basis by the end of Q3. Two years prior, the company had 2.93 billion people using at least one of its apps daily. 

Meta and Zuckerberg have shown a willingness to bet big on other emerging technologies in recent years, with mixed results. The company has put billions of dollars into getting the Metaverse, its virtual reality platform where users interact with one another using computer-generated likenesses, off of the ground. Zuckerberg called the Metaverse the “holy grail” of social media experiences in 2022, but so far it has not gained much traction, and the company has not shared an update on how many of its users have joined it.


 

The pursuit of AI is nothing more than an obvious grift:

https://braddelong.substack.com/p/we-once-again-need-to-raise-more

 

  Quote

John Gruber: OpenAI’s Board, Paraphrased: ‘To Succeed, All We Need Is Unimaginable Sums of Money’: ‘Un-bylined post from OpenAI’s board of directors…. “The hundreds of billions of <https://www.investopedia.com/meta-says-it-is-making-progress-with-ai-as-spending-grows-8737166> <https://www.bloomberg.com/news/articles/2024-04-16/deepmind-ceo-says-google-will-spend-more-than-100-billion-on-ai> <https://carboncredits.com/larry-ellisons-100-billion-bet-nuclear-power-to-drive-oracles-ai-revolution/> <https://artsmart.ai/blog/how-much-has-microsoft-invested-in-ai/> dollars… major companies are… investing into AI… show… we once again need to raise more capital than we’d imagined. Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness…”. My take[:]. OpenAI currently offers, by far, the best product experience of any AI chatbot assistant. There is no technical moat <https://daringfireball.net/linked/2024/07/26/zuck-open-source-ai>… and so OpenAI is the epicenter of an investment bubble…. OpenAI is to this decade’s generative-AI revolution what Netscape was to the 1990s’ internet revolution… <https://daringfireball.net/2024/12/openai_unimaginable>

 

Baekho~~~ I will always love you. My son.

 

Shout outs to the saracens, musulmen and celestials.

 

Still, the 200 tier of the chatbot doesn't know which default custom fonts are available in the Apple Books app, and neither does 4o with access to the internet, as I just had to find out.

Edited by o00o