
Experiments
8 Apr 2025
Pruned early-bird subnetworks in Transformers reduce memory by up to 49% while maintaining performance, validating a faster training strategy across ViT and GPT-2.

How We Found Early-Bird Subnetworks in Transformers Without Retraining Everything
8 Apr 2025
We identify early-bird subnetworks in Transformers using mask-distance pruning, optimizing training for ViTs and language models such as GPT-2 and RoBERTa.

Transformer Training Optimization via Early-Bird Ticket Analysis
8 Apr 2025
Investigating early-bird tickets in Transformers to reduce training costs while maintaining performance in vision and language models.

Who Made ELIZA Possible?
11 Sept 2024
This paper acknowledges the key contributors to the development of ELIZA, including significant support from MIT archivists and research team members.

How ELIZA’s Success Revealed the Pitfalls of Machine Credibility
11 Sept 2024
ELIZA, designed to study human interaction with AI, instead became a symbol of AI misinterpretation, highlighting the dangers of attributing intelligence to machines.

Another Wave: A BASIC ELIZA Turns the PC Generation On to AI
10 Sept 2024
In 1977, a BASIC version of ELIZA captivated personal computer users, spreading AI curiosity during the PC explosion, while the original MAD-SLIP ELIZA faded.

The Accidental AI: How ELIZA's Lisp Adaptation Derailed Its Original Research Intent
10 Sept 2024
Explore how, in an ironic twist, ELIZA's Lisp adaptation overshadowed its original intent as a research platform, leading to widespread misinterpretation.

Finally, ELIZA: A Platform, Not a ChatBot!
10 Sept 2024
Discover ELIZA’s true role as a research platform for studying human-machine interaction, revealing its deeper purpose beyond being a simple chatbot.

The Threads Come Together: Interpretation, Language, Lists, Graphs, and Recursion
10 Sept 2024
Explore how recursion, lists, and graph theory relate to interpretation in AI.