Blog

  • Building an AI-Powered Sound-Responsive Website

    At Drexel University’s Digital Media program, our students are pushing
    the boundaries of technology and creativity. In a recent DIGM Master’s
    course assignment, Dheeraj Mantha explored how artificial intelligence (AI) can
    transform responsive web development. His project, integrating AI-driven sound
    classification with HTML, CSS, and JavaScript, showcases the exciting
    potential of combining no-code AI tools with web technologies.

    The Project: A Sound-Responsive Web Experience

    Dheeraj set out to answer a compelling question: how effectively can a
    sound-based AI model, trained with Google’s Teachable Machine, classify
    audio inputs like button clicks and keyboard typing, and reflect those
    predictions in real time on a responsive website? With no prior coding
    experience in web development, he aimed to create a website that listens to
    real-world sounds—background noise, keyboard typing, or PS5 controller
    button presses—and responds by changing its background color and displaying
    the detected sound.

    Using a browser-based workflow, Dheeraj trained an AI model to recognize
    three distinct audio classes, exported it to TensorFlow.js, and integrated
    it into a webpage built with HTML, CSS, and JavaScript. The result? A
    dynamic interface that shifts colors in real time: a neutral theme for
    background noise, a blue pulsing theme for keyboard typing, and a
    yellow-orange shaking theme for controller presses.
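    The core wiring is simple once the model is exported: pick the
    highest-scoring class on each prediction and swap the page theme. The
    sketch below illustrates that mapping in JavaScript; the class labels,
    colors, and animation names are illustrative assumptions, not the exact
    values from Dheeraj’s model.

    ```javascript
    // Hypothetical class-to-theme mapping; the real labels come from the
    // trained Teachable Machine model, and the colors/animations are guesses.
    const THEMES = {
      "Background Noise": { background: "#f0f0f0", animation: "none"  },
      "Keyboard Typing":  { background: "#4a90d9", animation: "pulse" },
      "Button Press":     { background: "#f5a623", animation: "shake" },
    };

    // Given the model's labels and per-class scores, pick the winning class
    // and return the theme to apply to the page.
    function themeFor(labels, scores) {
      let best = 0;
      for (let i = 1; i < scores.length; i++) {
        if (scores[i] > scores[best]) best = i;
      }
      return { label: labels[best], ...THEMES[labels[best]] };
    }

    // In the browser, the exported recognizer would drive it on each result:
    //   recognizer.listen(({ scores }) => {
    //     const t = themeFor(recognizer.wordLabels(), scores);
    //     document.body.style.background = t.background;
    //     document.body.className = t.animation; // CSS keyframes do the pulsing/shaking
    //   }, { probabilityThreshold: 0.75 });
    ```

    Keeping the theme lookup as a pure function separates the AI prediction
    loop from the DOM updates, which makes the color logic easy to test and
    tweak without touching the model code.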

    Tools and Workflow

    Dheeraj leveraged three key tools to bring his vision to life:

    • Google Teachable Machine: This no-code platform allowed Dheeraj to
      train a sound classification model by recording audio samples directly in
      the browser. Its simplicity and TensorFlow.js export feature made it ideal
      for beginners.
    • TensorFlow.js: Running entirely in the browser, this library
      enabled real-time audio processing without complex backend setup, ensuring
      fast and responsive predictions.
    • Replit: As an online IDE, Replit provided a seamless environment
      for coding and testing the webpage, with live previews to streamline
      development.

    Despite his lack of coding experience, Dheeraj used ChatGPT to generate the
    necessary HTML, CSS, and JavaScript code, adapting it to integrate the AI
    model. He faced challenges, including bugs in Teachable Machine’s
    recording process and Replit’s preview functionality, but resolved
    them through persistence, browser restarts, and refined AI prompts.

    Challenges and Triumphs

    The journey wasn’t without hurdles. Teachable Machine occasionally
    froze during audio recording, requiring multiple retries. Replit’s
    preview button failed at times, forcing Dheeraj to test the website locally.
    Additionally, Chrome’s persistent microphone permission requests posed
    a usability issue. However, by refreshing tools, refining code with ChatGPT,
    and tweaking browser settings, Dheeraj overcame these obstacles to deliver a
    functional prototype.

    The website successfully differentiated between sounds, though the
    animations (pulsing and shaking effects) were less dynamic than hoped.
    Still, the color changes and real-time text updates provided clear visual
    feedback, making the interface engaging and interactive.

    Why It Matters

    Dheeraj’s project highlights the power of accessible AI tools in web
    development. For a beginner with no coding background, the ability to train
    an AI model and integrate it into a responsive webpage is a testament to the
    democratization of technology. His work also opens doors to real-world
    applications, such as:

    • Accessibility: Sound-based controls for websites, enabling interaction
      without touch or mouse input.
    • Interactive Installations: Dynamic interfaces for gaming or portfolio
      websites that respond to environmental sounds.
    • Education: Tools for classrooms to trigger UI actions based on ambient
      noise detection.

    Looking Ahead

    Dheeraj plans to enhance the project by adding more sound classes (like
    claps or voice commands), improving animations, and exploring Replit’s
    AI code generator. He also envisions logging user interactions to analyze
    sound patterns over time. These ambitions reflect the iterative spirit of
    Drexel’s Digital Media program, where students are encouraged to
    experiment and innovate.

    A Bright Future for AI in Web Development

    This project underscores how AI tools like Teachable Machine and
    TensorFlow.js empower students to create sophisticated, responsive web
    experiences without deep technical expertise. For Dheeraj, the process was a
    crash course in problem-solving, UI design, and the potential of AI-driven
    interactivity. His success inspires us all to explore the intersection of AI
    and web development.

    Stay tuned for more innovative projects from our Digital Media students as
    they redefine what’s possible!

  • Digital Media Alumni Spotlight: Steven McInerney

    Meet Steven McInerney, a 2024 graduate of Drexel University’s User Experience & Interaction Design (UXID) program. Steven’s passion for solving real-world problems through thoughtful design has guided his journey into the tech industry, where he currently works as a UI Designer for a fast-growing startup.

    Drexel Experience

    Steven’s favorite project at Drexel was Clink, a cocktail recommendation app that he developed with his senior project team. This project was a turning point in his academic journey, allowing him to fully immerse himself in the design process and realize his desire for a career in UX. Despite the long hours spent prototyping and refining user flows, Steven found the experience deeply rewarding, as it transformed his understanding of design from a creative interest to a professional pursuit.

    Career Path & Achievements

    Currently, Steven is a UI Designer at DoorSpot, a startup focused on helping people rent and find properties through intuitive digital dashboards. In this role, he designs key user flows and UI elements, applying the same skills he honed during his time in Drexel’s UXID program. Outside of work, Steven enjoys traveling abroad with friends, drawing inspiration from the diverse cultures and environments he encounters.

    Advice to Students

    Steven encourages students to try different roles in class projects, whether it’s design, research, or development. He believes that college and co-op experiences are the best times to explore new skills, as these early trials make adapting to professional environments much easier. His advice for aspiring designers? “Don’t shy away from trying different roles—this is your time to experiment and grow.”

    Connect with Steven: LinkedIn

  • DIGM 2025 Showcase

    The Senior Showcase is the electrifying culmination of a year’s relentless research, daring design, and cutting-edge development by Drexel’s Digital Media seniors. Teams unveil capstone projects that redefine digital innovation, from intuitive apps to immersive games. This event celebrates their vision and grit, launching the next generation of creators into industries hungry for bold ideas. Join us to witness the future, crafted today.

  • The Digital Peale Museum Project

    Charles Willson Peale was an 18th- and early-19th-century American portrait artist. In the late 1700s, he began a museum of art, science, and technology in his Philadelphia home/studio on Lombard Street.

    The museum’s collection and popularity rapidly outgrew its space. In 1796, Peale moved his museum to the newly built Philosophical Hall, where it again rapidly outgrew its space. In 1801, he moved his museum into the recently vacated Pennsylvania State House, now known as Independence Hall, a World Cultural Heritage Site renowned for the signing of the Declaration of Independence and the US Constitution.

    Digital Media faculty members Glen Muschio and Dave Mauriello are mentoring students interested in digitally recreating the Long Room exhibits in Peale’s Museum for use at Independence National Historical Park.

    Charles Willson Peale’s Philadelphia Museum

    The following video, titled “Charles Willson Peale’s Philadelphia Museum,” shows student work on the project.

    Who Tells What Stories to Whom, When and Where

    The following video, titled “Who Tells What Stories to Whom, When and Where,” provides background about the project and future plans.

    For additional information about the ongoing project, contact Glen Muschio.

  • Silent Emotion: Capturing Conflict in Character Animation

    In the ANFX Character Animation II course at Drexel University, students explore the art of human facial deformation and movement as it relates to thought-driven performance. It’s a class that challenges animators to go beyond technical execution, asking them to consider how emotion lives in the details—especially when there’s no dialogue to rely on.

    One final project from this term stands out as a stunning example of that challenge in action. The animation focuses on a heated argument between two women, and while it plays out in total silence, the emotional intensity is undeniable.

    Facial Animation as Performance

    What makes this animation truly memorable is the care the student put into the facial expressions. The performance doesn’t rely on exaggerated cartoonish motions—instead, it leans into realism. Small, subtle shifts in the eyebrows, eyelids, and jawline do the heavy lifting, capturing the nuance of frustration and anger with remarkable precision.

    Reading Emotion Without Sound

    Even in the absence of audio, the scene feels loud. You can almost hear the characters yelling, feel their tension rise, and sense the moment just before one interrupts the other. It’s a clear demonstration of how facial animation can drive narrative and emotion just as powerfully as voice acting—if not more so.

    Animating the Whole Character

    Beyond the facial detail, the student took full advantage of body language to support the storytelling. Weight shifts, posture changes, hand gestures—each element adds to the performance and deepens the emotional impact. Together, these physical cues bring the characters to life, giving them presence, intention, and urgency.

    The animation feels polished, smooth, and emotionally rich. It’s more than just a technical achievement—it’s a thoughtful character study that invites viewers into the story without needing a single line of dialogue. It’s this kind of work that showcases what’s possible when animation is used not just to move characters, but to make them feel alive.

  • MoodSense: Revolutionizing Mood Tracking with Smart Ring Technology

    In an era where mental health awareness is paramount, MoodSense emerges as an innovative solution developed by a student from Drexel University’s User Experience and Interaction Design (UXID) program. This Apple Watch interface, paired with smart ring technology, aims to accurately detect users’ moods and provide personalized activities for self-improvement.

    The Challenge: Beyond Novelty Mood Rings

    Traditional mood rings, while popular, are merely novelty items that change color based on body heat. MoodSense addresses this limitation by leveraging advanced smart ring technology to provide accurate mood detection based on comprehensive health data.

    Innovative Technology

    MoodSense utilizes PPG (Photoplethysmography) technology in smart rings to collect various health metrics from the skin’s surface, including:

    • Sleep patterns
    • Body temperature
    • Heart rate
    • Activity levels
    • Blood oxygen
    • Stress levels

    Design Process and Features

    The development of MoodSense followed a comprehensive process:

    • Competitor Research: Identified gaps in existing smart ring applications.
    • Wireframing: Created low-fidelity prototypes to plan screens and navigation.
    • Design Inspiration: Drew from Apple Watch design standards and mood ring aesthetics.
    • Style Guide: Developed a cohesive visual language for the app.
    • Final Design: Refined the interface based on critical user paths.

    Key Features

    MoodSense offers unique functionalities:

    • Accurate mood identification based on health data
    • Personalized activity suggestions for mood improvement
    • Social features to send encouragement to friends
    • Color and shape associations for different moods

    User Experience

    The final prototype, created using Figma, offers an interactive experience showcasing the app’s core functionalities. Users can explore mood detection, receive personalized activities, and engage with friends, all through an intuitive Apple Watch interface.

    MoodSense represents a significant step forward in mood tracking and mental health support. By combining cutting-edge smart ring technology with a user-friendly Apple Watch interface, it offers users a powerful tool to understand and improve their emotional well-being.

  • Specimania: Bringing a Hidden Insect Collection to Life

    Drexel University’s Digital Media & Virtual Production (DMVP) program recently showcased an innovative senior capstone project that pushes the boundaries of interactive museum experiences. The project, titled “Specimania,” was developed by a talented team of students known as Bug Byte Studios, demonstrating the program’s commitment to preparing students for real-world production environments.

    Specimania is an interactive exhibit designed to reveal the mysteries of the Academy of Natural Sciences’ hidden insect collection. The project combines cutting-edge technologies such as virtual reality, 3D modeling, and interactive displays to create an immersive and educational experience for museum visitors.

    Key Features and Technical Achievements

    The centerpiece of Specimania is a custom-designed kiosk that allows visitors to explore virtual specimens in various ways. This includes a holographic display showcasing high-fidelity 3D models of insects, an educational game where users can play as different bee species, and the ability to switch between virtual specimens and view them in different experiences. The team also created detailed 3D printed models of the insects, allowing visitors to observe intricate details up close.

    Technical Prowess

    The Bug Byte Studios team showcased impressive technical skills throughout the project, including photogrammetry and 3D modeling to create accurate digital representations of insects, advanced hair grooming techniques to replicate realistic insect textures, and implementation of scientifically accurate animations using retargeting in Unreal Engine.

    Real-World Impact and Future Developments

    Specimania was not just a theoretical exercise. The project was developed in collaboration with the Academy of Natural Sciences and was showcased during Earth Day weekend, receiving valuable feedback from real visitors. This real-world testing allowed the team to refine their project based on user interactions and responses.

    The success of Specimania has led to exciting developments for the DMVP program. A new class called “Digitizing Nature” will be offered in collaboration with the Academy of Natural Sciences, allowing future students to build upon the foundations laid by this project. This collaboration is set to grow, providing more opportunities for students to work on real-world projects and gain hands-on experience with cutting-edge technologies in museum and educational settings.

  • Titra Labs: Revolutionizing Chemistry Education Through Virtual Reality

    Drexel University’s Digital Media & Virtual Production (DMVP) program showcased an innovative senior capstone project in 2023 that pushed the boundaries of interactive chemistry education. The project, titled “Titra Labs,” was developed by Anthony Alcantia, demonstrating the program’s commitment to preparing students for real-world production environments and interdisciplinary applications.

    Titra Labs is a comprehensive chemistry titration simulation that brings laboratory experimentation into the virtual realm. This project showcased the potential of VR technology in science education, allowing users to weigh solids, pour solutions, and conduct titrations in a highly interactive and immersive environment.

    Bridging Science and Technology

    Anthony Alcantia, a DMVP student with a background in biochemistry, developed Titra Labs over nine months using Unreal Engine 5 for Oculus systems. The project exemplified the integration of digital technologies with scientific disciplines, demonstrating how virtual reality can enhance understanding of complex chemical processes.

    Key Features of Titra Labs

    Titra Labs offered a unique learning experience by combining the physical aspects of laboratory experiments with real-time analysis and calculations. Users could perform titrations, observe color changes indicating reaction endpoints, and receive immediate feedback on their experimental techniques. This hands-on approach in a virtual environment allowed for repeated practice without the constraints of physical lab resources.
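    The real-time analysis behind a basic acid-base titration reduces to one
    relationship: at the endpoint, moles of titrant equal moles of analyte, so
    for a 1:1 reaction the unknown concentration is c = (c_titrant ×
    V_titrant) / V_analyte. A minimal sketch of that calculation follows; it
    assumes 1:1 stoichiometry, and the actual formulas and feedback logic
    inside Titra Labs may differ.

    ```javascript
    // Endpoint concentration for a 1:1 acid-base titration (assumed
    // stoichiometry; Titra Labs' real model may handle other ratios).
    // Concentrations are in mol/L; volumes share any unit, since they cancel.
    function analyteConcentration(titrantMolarity, titrantVolume, analyteVolume) {
      if (analyteVolume <= 0) throw new Error("analyte volume must be positive");
      return (titrantMolarity * titrantVolume) / analyteVolume;
    }

    // Example: 25.0 mL of 0.100 M NaOH neutralizes 20.0 mL of HCl,
    // so the HCl concentration is (0.100 * 25.0) / 20.0 = 0.125 M.
    ```

    In a VR simulation, a function like this can run on every pour, letting
    the app flag the color-change endpoint and grade the user’s technique
    immediately rather than after the experiment ends.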

    Impact on Chemistry Education

    The project aimed to make chemistry more accessible and engaging for students at various levels. By simulating real-world titration experiments, Titra Labs provided a safe, cost-effective, and interactive way for students to gain practical experience. This VR application could potentially supplement traditional laboratory work, especially in situations where access to physical labs is limited.

    Anthony’s work on Titra Labs demonstrated the DMVP program’s commitment to fostering innovative projects that have real-world applications. The project not only showcased technical proficiency in VR development but also highlighted the potential for cross-disciplinary collaboration between digital media and scientific fields.

  • Exploring New Worlds: Student-Created Environments in Unreal Engine 5

    The world-building final projects in the DMVP program showcase the incredible creativity and technical prowess of student teams. Each group crafted a unique virtual environment within Unreal Engine, combining advanced materials, procedural foliage, and dynamic lighting to bring their vision to life. Let’s explore the immersive worlds created by three standout teams.

    Smith, Burcksen, Weber, Solis – A Forgotten Monastery

    This team transported viewers to an ancient, abandoned monastery hidden deep within a misty valley. The environment is rich with overgrown foliage, cracked stonework, and the eerie ambiance of a place lost to time. The use of Unreal Engine 5’s Lumen lighting technology creates a dynamic interplay of light and shadow, enhancing the sense of mystery and depth in the scene.

    Technical Achievements

    The team leveraged Nanite geometry for highly detailed assets, ensuring that each pillar, staircase, and moss-covered statue held an impressive level of fidelity. Additionally, their automated foliage system added layers of natural growth to make the environment feel authentically aged and untouched.

    Valsma, Haw, Johnson – The Cyberpunk Hideout

    A stark contrast to the monastery, this team built a neon-lit cyberpunk hideout, pulsating with the energy of a futuristic underworld. The space is filled with holographic billboards, rain-soaked streets, and flickering neon lights, bringing a vibrant yet ominous atmosphere to the scene.

    Lighting and Atmosphere

    Using Unreal Engine 5’s real-time reflections, the team created hyper-realistic lighting effects, where neon signs reflected dynamically on wet surfaces. They also integrated animated elements such as moving signs and steam vents to add a sense of motion and life to the environment.

    Deeb, Eggler, Heller, Quint – The Deserted Space Colony

    This team took players to a desolate space colony on a distant planet. The barren landscape, abandoned structures, and a massive looming planet in the sky create a truly immersive sci-fi setting. The environment is detailed with scattered remnants of past inhabitants, telling a silent story of what once was.

    Procedural Generation and Detail

    A key highlight of this project was the use of procedural generation for rock formations and terrain sculpting, making the alien world feel vast and natural. The team also paid meticulous attention to the hero assets, including derelict space rovers and worn-down habitats, adding depth to the world’s history.

    These student projects highlight the power of Unreal Engine 5 and the incredible talent within the DMVP program. Each team brought a distinct vision to life, mastering industry-standard tools and workflows to craft stunning, explorable environments. Whether a forgotten monastery, a neon cyberpunk world, or an abandoned space colony, these projects demonstrate the boundless possibilities of real-time world-building.

  • Bringing Motion to Life: Student Projects in Motion Capture

    Motion capture technology has revolutionized the way digital performances are created, blending the precision of real-world movement with the limitless possibilities of digital animation. In this course, students explored industry-standard techniques for full-body human performance capture, refining their skills in data processing and integration into character animation and game engine applications.

    Group Final Projects: Crafting Digital Performances

    As part of their final project, student teams had the choice to either build a library of motion capture data or create a short animated scene using their own captured performances. Each project involved a meticulous process, from initial planning and choreography to motion capture sessions, data refinement, and final animation integration.

    Behind the Scenes

    A key component of the group projects was the process documentation video. These videos showcased the workflow behind capturing motion data, from actor performances in mocap suits to the cleaning and retargeting of motion data for use in animation software and game engines. Through these process breakdowns, students demonstrated their technical expertise and creative problem-solving skills.

    Individual Motion Capture Demo Reels

    In addition to the group projects, each student produced an individual demo reel, highlighting their personal workflow and expertise in motion capture. These reels featured key stages of the process, including the in-studio capture session, data editing and cleanup, and a sample of the final polished animation.

    These demo reels serve as a showcase of the students’ technical skills and understanding of motion data application, providing valuable portfolio pieces for careers in animation, virtual production, and game development.

    Through both group and individual projects, students gained hands-on experience with professional motion capture pipelines, developing the ability to translate human performances into compelling digital animations. This class not only sharpened their technical skills but also reinforced the importance of storytelling, precision, and collaboration in the world of motion capture.