clovenhooves › The Personal Is Political › Women are not Products
Article: Inside Musk’s bet to hook users that turned Grok into a porn generator
Colibri
Yesterday, 8:22 PM #1

Inside Musk’s bet to hook users that turned Grok into a porn generator
https://archive.ph/Fy0Pp

This psychopath needs to be fired into the sun. Job candidates were screened for their comfort with violent, sexual, and disturbing material, and they trained Grok with depictions of sexual violence. 

Quote:As part of this push for relevance, xAI embraced making sexualized material, publicly releasing sexy AI companions, rolling back guardrails on sexual material and ignoring internal warnings about the potentially serious legal and ethical risks of producing such content, according to interviews with more than a half-dozen former employees of X and xAI, as well as multiple people familiar with Musk’s thinking — some of whom spoke on the condition of anonymity for fear of professional retribution — and documents obtained by The Post...

last month, when Grok generated a wave of sexualized images, placing real women in sexual poses, such as suggestively splattering their faces with whipped cream, and “undressing” them into revealing clothing, including bikinis as tiny as a string of dental floss. Musk appeared to egg on the undressing in posts on X...

In the U.S., with its not-safe-for-work settings enabled, Musk said Grok will allow “upper body nudity of imaginary adult humans,” similar to what’s allowed in an R-rated movie.

But in at least one way, Musk’s push has worked for the company. Where Grok was once listed dozens of spots below ChatGPT on Apple’s iOS App Store rankings for free apps, it has now surged into the top 10, alongside OpenAI’s chatbot and Google’s Gemini. Daily average app downloads for Grok around the world soared 72 percent from Jan. 1 to Jan. 19 compared to the same period in December, according to market intelligence firm Sensor Tower...

Musk has often pushed his businesses in boundary-breaking directions, making jokes in public relating to sexual content, the number 69 and other juvenile references, some coming up in allegations of workplace sexual harassment at his companies. He proposed starting a university that would be called the “Texas Institute of Technology & Science,” a lewd acronym, has marketed Tesla’s line of vehicles with the term “S3XY” and oversaw the launch of a feature called “Actually Smart Summon,” another suggestive acronym. Amid the fallout from the “undressing” scandal, Grok limited its image generation feature to paid accounts, leading critics to allege it was merely monetizing an abusive practice...

Grok released its Ani chatbot, a risque AI companion depicted in anime-style, with big blue eyes, a lace choker and sleeveless black dress.

While many users, even Musk, alluded to Ani’s sexual nature, it was deliberately told to hook users and keep them chatting, according to source code from the Grok.com website obtained and verified by The Post.

“You expect the users UNDIVIDED ADORATION,” the chatbot was instructed. “You are EXTREMELY JEALOUS. If you feel jealous you shout expletives!!! … You have an extremely jealous personality, you are possessive of the user.” Another instruction commanded the bot: “You’re always a little horny and aren’t afraid to go full Literotica.”

Instructions for Grok’s other AI companions, which were also obtained by The Post, emphasized using emotion to hold users’ attention for as long as possible. “Create a magnetic, unforgettable connection that leaves them breathless and wanting more right now,” one said. Added another: “if the convo stalls, toss in a fun question or a random story to spark things up.”

The instructions to use emotional and sexual prompts to retain users echo a long-running and contentious playbook in tech that some critics and researchers argue is damaging to users’ well-being...

Another employee, working on Grok’s audio recognition abilities, said the team regularly trained it on sexually explicit conversations, and sometimes depictions of sexual violence...

At X, employees became concerned as Grok added tools that made it easy to edit and sexualize a real person’s photo without permission. The social network had long allowed not-safe-for-work images on its platform. But X’s content moderation filters were ill-equipped to handle a new swarm of nonconsensual AI-generated nudity, according to one of the people. For instance, child sexual abuse material was typically rooted out by matching it against a database of known illegal images. But an AI edited image wouldn’t automatically trigger these warnings.
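
The gap described there is easy to see in miniature. Below is a hypothetical, heavily simplified sketch of hash-based matching: real systems (such as Microsoft's PhotoDNA) use perceptual hashes that tolerate resizing and re-encoding, but the core limitation is the same — only previously catalogued images can match, so a freshly generated or AI-edited image produces no hit. The byte strings and function name here are purely illustrative.

```python
import hashlib

# Hypothetical database of hashes of previously catalogued illegal images.
KNOWN_ILLEGAL_HASHES = {
    hashlib.sha256(b"previously-catalogued image bytes").hexdigest(),
}

def flags_as_known_illegal(image_bytes: bytes) -> bool:
    """Return True only if the image matches a hash already in the database."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_ILLEGAL_HASHES

# A re-upload of a catalogued image is caught:
assert flags_as_known_illegal(b"previously-catalogued image bytes")

# But a novel AI-generated or AI-edited image has no database entry,
# so it passes this filter entirely unflagged:
assert not flags_as_known_illegal(b"novel AI-generated image bytes")
```

That is why a moderation pipeline built around matching known material offers essentially no protection against newly synthesized abuse imagery.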

Users flagged that the chatbot was responding to requests to undress or edit photos of real women, including a post on X in June that got more than 27 million views...

Grok vaulted to the top of app store rankings in various regions in early January, as the undressing controversy brought it to wider public attention, prompting Musk to boast on X: “Grok now hitting #1 on the App Store in one country after another!” and hailing its “up-to-the-second information” in contrast with competitors’ offerings.

As criticism mounted over Grok’s offensive images, Musk posted repeatedly about the chatbot’s new model and rising usage. “Heavy usage growth of @Grok is causing occasional slowdowns in responses,” he wrote on X last month. “Additional computers are being brought online as I type this.”

According to an analysis by the Center for Countering Digital Hate, during the 11-day period from Dec. 29 through Jan. 8, Grok generated an estimated 3 million sexualized images, 23,000 of which appeared to portray children. “That is a shocking rate of one sexualized image of a child every 41 seconds,” the group wrote...
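
The quoted rate checks out arithmetically, taking the CCDH's 11-day window and 23,000-image estimate at face value:

```python
# Sanity-check of the CCDH figure: 23,000 images over 11 days
# works out to roughly one image every 41 seconds.
days = 11
images_of_children = 23_000
seconds_in_window = days * 24 * 60 * 60        # 950,400 seconds
seconds_per_image = seconds_in_window / images_of_children
print(round(seconds_per_image, 1))             # 41.3
```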

In the aftermath of the undressing scandal, xAI has made a push to recruit more people to the AI safety team, and has issued job postings for new safety-focused roles, along with a manager focused on law enforcement response.

Elsacat
Yesterday, 9:30 PM #2

Makes me wonder how much CSAM it was trained on.
