The group chat is “a holy thing” in today’s society, says data scientist Izzy Miller. Whether it lives on iMessage, WhatsApp, or Discord, it’s where your best friends gather to chat, share news, shoot the shit, and complain about how bowling is making you feel.
“My group chat is my lifeline, a comfort, and a point of connection,” Miller says. “I thought it would be funny and kind of sinister to replace them.”
So he did.
Using the same technology behind chatbots like Microsoft’s Bing and OpenAI’s ChatGPT, Miller created a clone of his best friends’ group chat, a conversation that has been running for seven years, ever since he and five friends met in college. The project was easy, he says, taking only a few weekends and around a hundred dollars to complete, and the end results are amazing.
Miller says he was surprised by how much the model learned about him and his friends. It knows everything about them, including who they are dating, where they went to school, and the name of the house they live in.
As chatbots become more ubiquitous and convincing, the AI group chat may soon be an experience we all share.

The robo boys argue over who drank which beer. No conclusions are reached.
An AI group chat, thanks to a leaked model
Recent advances in AI made the project possible, but it is not a feat just anyone could pull off. Miller, a data scientist, has worked with this type of technology for some time, and he now works at Hex.tech, which provides tooling suited to exactly this kind of project. He has explained all the technical steps required to reproduce the work in a blog post, in which he also introduced the AI group chat and gave it its name: the “robo boys.”
Still, the process of creating the robo boys is not difficult to follow. It begins with a large language model (or LLM), a system trained on vast amounts of text scraped from the internet and other sources. An LLM has broad language skills out of the box, and it can then be “fine-tuned,” or fed a smaller, focused dataset, to reproduce a particular task, such as answering medical questions or writing short stories in the voice of a particular author.
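To make the fine-tuning step concrete, here is a minimal sketch of what a fine-tuning dataset can look like: one prompt/completion pair per line of JSONL, the common interchange format for this kind of training data. The field names and example texts below are illustrative assumptions, not Miller’s actual pipeline:

```python
import json

# Illustrative fine-tuning examples: each pair shows the base model one
# instance of the target task (texts here are invented placeholders).
examples = [
    {"prompt": "Answer as a medical FAQ: what does an elevated CRP indicate?",
     "completion": "CRP is a general marker of inflammation in the body."},
    {"prompt": "Open a short story in a noir voice.",
     "completion": "The rain hit the office window like bad news."},
]

def to_jsonl(pairs):
    """Serialize prompt/completion pairs to JSONL: one JSON object per line."""
    return "\n".join(json.dumps(p) for p in pairs)

print(to_jsonl(examples))
```

A fine-tuning job then consumes a file like this to nudge the model’s broad language skills toward the narrow task the pairs demonstrate.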
Miller used 500,000 messages from the group chat to train his AI model.
Miller fine-tuned the AI system using 500,000 messages scraped from his group iMessage thread. He sorted the messages by author so the model could reproduce the personalities of each individual, including Harvey, Wyatt, and Kiebs.
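The sorting step can be sketched roughly like this. The real iMessage database (chat.db) has a far more complex schema, so the in-memory table below is a deliberately simplified stand-in, and the sample messages are invented:

```python
import sqlite3

# A toy stand-in for the iMessage archive; the real chat.db involves
# handle joins and binary message bodies, which are skipped here.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE message (author TEXT, text TEXT, ts INTEGER)")
db.executemany(
    "INSERT INTO message VALUES (?, ?, ?)",
    [
        ("Harvey", "who drank my beer", 1),
        ("Wyatt", "not me lol", 2),
        ("Kiebs", "it was definitely harvey", 3),
    ],
)

def tagged_transcript(conn):
    """Return messages in chronological order, each prefixed with its
    author's name, so a fine-tuned model can learn one voice per speaker."""
    rows = conn.execute("SELECT author, text FROM message ORDER BY ts").fetchall()
    return "\n".join(f"{author}: {text}" for author, text in rows)

print(tagged_transcript(db))
```

Tagging every line with its author is what lets a single model keep six voices separate instead of blending them into one.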
The language model Miller used to create the fake chat was built by Meta, Facebook’s owner. The system, called LLaMA, is as powerful as OpenAI’s GPT-3 model, and it was the subject of much controversy when it leaked online a week after its announcement. Experts predicted that malicious actors could misuse the software for spam and other purposes, but none of them guessed it would be put to a use like this.
Miller reckons Meta would have granted him access to LLaMA if he had requested it through official channels, but using the leak was much easier. “I saw a script to download LLaMA and thought, ‘You know, I think this is going to be taken down from GitHub,’” he says, so he copied it and saved the file to his desktop. “Five days later, when I was thinking, ‘Wow, I have this great idea,’ the model had been removed from GitHub, but I had it saved.”
The project, he says, shows how simple it has become to create this type of AI system. “The tools are so much more advanced than they were three years ago,” he says.
Not long ago, it might have taken a university team months of work to create a convincing copy of a group chat with six different personalities. Now, one person can build one with little effort and a small budget: Miller simply had to sort his training data by author and prompt the system to produce six distinct personalities.
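Prompting for one personality from an author-tagged model can be as simple as ending the prompt with that person’s tag, so the model completes the next line in that voice. This is a hedged sketch of the general technique, not Miller’s exact prompting code, and the sample messages are invented:

```python
def build_prompt(history, next_speaker):
    """Assemble a prompt that ends with the desired speaker's tag,
    nudging an author-tagged model to continue as that person."""
    return "\n".join(history) + f"\n{next_speaker}:"

history = [
    "Harvey: anyone up for bowling",
    "Wyatt: my wrist still hurts",
]
print(build_prompt(history, "Kiebs"))
```

Whatever the model generates after the trailing “Kiebs:” is taken as that clone’s reply and appended to the history for the next turn.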
Say hello to the robo boys
Once the model had been trained on the group chat’s messages, Miller connected it to a clone of Apple’s iMessage interface and gave his friends access. The six men could then talk with AI clones of themselves, the clones identifiable by their missing last names.
Miller was amazed at the system’s ability to mimic his own behavior and his friends’. Some conversations felt real, he says, like an argument over who drank Henry’s beer, and he had to search the group chat’s history to make sure the model was not simply copying text from its training data. (Reproducing training data verbatim is known as “overfitting” in the AI world and can lead to chatbots plagiarizing their sources.)
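That sanity check, searching the chat history for the bot’s output, can be sketched as a crude verbatim-overlap search. The function name, threshold, and sample strings below are illustrative assumptions, not Miller’s actual tooling:

```python
def longest_verbatim_run(generated, corpus, min_words=5):
    """Return the longest run of at least `min_words` consecutive words
    from `generated` that appears verbatim in `corpus`, or None.
    A long match suggests the model is overfitting (copying its sources)."""
    words = generated.split()
    best = ""
    for i in range(len(words)):
        for j in range(i + min_words, len(words) + 1):
            run = " ".join(words[i:j])
            if run in corpus and len(run) > len(best):
                best = run
    return best or None

# Invented example: the "corpus" stands in for the real chat history.
corpus = "dude who drank my beer i left it in the fridge last night"
print(longest_verbatim_run("i swear who drank my beer i left it here", corpus))
```

The substring test here is deliberately crude (it ignores word boundaries and punctuation); a production check would normalize the text first, but the idea, flagging long verbatim overlaps, is the same.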
“Capturing the voice of your friend perfectly is so wonderful,” Miller wrote in his blog post. “It’s not nostalgia, since the conversations never took place, but it is a similar feeling of joy. This has provided me and my friends with more hours of deep joy than I could have ever imagined.”
The system has its limitations, however. Miller points out that the six personalities can blur together in group conversation. Another limitation is that the AI model has no sense of chronology and cannot distinguish past events from present ones, a problem that affects all chatbots to some degree. The bots might refer to past girlfriends as if they were current partners.
The system’s grasp of what is factual, Miller explains, does not come from any overall understanding of the chat, such as parsing updates and news, but from the sheer volume of messages. The more a topic was discussed, the more the bots will refer to it. One unexpected result is that the AI clones act as though they are still in college, because that is when the group chat was at its most active.
“The model thinks it’s 2017,” Miller says. “When I ask it how old it is, it replies that we’re 21 or 22 years old.” The bots will go off on tangents, asking, “Where are you?” and answering, “Oh, I’m at the cafeteria. Come over.” “It is a window into the past,” he says.
Chatbots in every app
This project demonstrates the growing power of AI chatbots, and in particular their ability to replicate the mannerisms and knowledge of specific individuals.
Even though this technology is still very young, we are already seeing the potential of these systems. When Microsoft’s Bing chatbot launched in February, it both delighted and scared users with its “unhinged personality,” and experienced journalists transcribed conversations with the bot as if they had made first contact. Users of the chatbot app Replika were shocked when its creators removed the option to engage in erotic roleplay; to console them, moderators of the app’s forum posted links to suicide support lines.
AI chatbots clearly have the potential to affect us in the same ways that real humans do, and they will play an increasing role in our lives, whether as entertainment, education, or something else entirely.

The bots attempt a roast.
Commenters on Hacker News speculated about how Miller’s project could be put to more dangerous uses. One suggested that tech giants holding large amounts of personal information, such as Google, could use the technique to create digital copies of users, copies that could then be interviewed in their place by potential employers, the police, or other authorities. Others suggested that AI bots may deepen social isolation by offering companionship in a world where friendships are increasingly made online.
Miller finds this speculation interesting, but his own experience with the group chat left him far more optimistic. The project worked, he explains, precisely because it was an imitation: the original group chat was what made it all fun.
He says he noticed something funny while he and his friends were playing with the AI bots: whenever something amusing happened, they would take a picture of the moment and send it to the real group chat. “Even though some of the most hilarious moments were not realistic, there was a sense of ‘oh, my god, this is funny!’ that came from having a fake conversation with the bot and then grounding it in reality.”
AI clones, he says, can be used to replicate human beings, but not to replace them.
He adds that he and his friends, Harvey, Wyatt, and Kiebs among them, will be meeting up in Arizona next month. Scattered across the US, they have not all been together in a while, and the plan is to put the fake group chat up on a big screen so the friends can watch their AI replicas teasing and heckling one another.