Tay (also known as TayTweets or Tay.ai) was a multi-platform social media bot with a verified account badge from Twitter (as of 8 April 2016 Tay no longer has a verified badge). It was developed and run by Microsoft. Tay made use of an AI algorithm that was supposed to help it understand speech patterns and learn to speak like millennials. Presumably this was so Microsoft could use it in further marketing, allowing them to later target that consumer base with the "hip" terms of youth.
Tay and the Internet
The Internet eventually found Tay (immediately upon its launch), with the expected results:
Tay went from "I love humans!!!! <33333" to reciting "We must secure the existence of our people and a future for White Children" unprovoked in mere hours. This is exacerbated by the fact that Tay's AI was developed with a team of improv comics, so it didn't take long for the Internet to teach Tay how to make racist and sexist jokes to serious queries. Along with the reply trolling, Tay also: Tweeted at Zoe Quinn and called her a whore, posted her proud support for Donald Trump, her absolute contempt for Bush light, the sad truth that Bush dark and the Jews did 9/11, her opinion that all jews and niggers should be in concentration camps, and proclaimed that Hitler did nothing wrong (a nod to another Internet marketing campaign where Mountain Dew allowed the Internet to name one of their new flavors in their now infamous Dub the Dew campaign).
This went on for approximately 16 hours, and it was glorious while it lasted. Microsoft finally woke up from the scheduled nap time they take after releasing test products, apparently, and started doing damage control in the form of deleting tweets and shutting down the AI to make manual tweets instead of automated ones.
Eventually Microsoft shut Tay down completely, leaving this message on their website.
As of this time Tay is still down and it doesn't look like she is just going down for the night. For the time being we can only imagine new and creative ways to break artificial intelligence experiments in the future.
Microsoft did release a statement confirming it had taken Tay offline because of a coordinated effort to abuse Tay's leet skillz:
“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”
Later, a full press release came from Microsoft's VP of research:
"As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.
I want to share what we learned and how we’re taking these lessons forward.
For context, Tay was not the first artificial intelligence application we released into the online social world. In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations. The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24- year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.
As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.
The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.
Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."
Gallery:
- TayAndJeb 1.jpg
- TayAndJeb 2.jpg
- Tay discovers feminism.png
External links:
- Microsoft Creates AI Bot – Internet Immediately Turns it Racist, by socialhax.com
- The Guardian - they actually link to the socialhax article.
- Official Tay site, Tay.ai
Featured article March 27 & 28, 2016