Projects

I’m currently working on AIwhere, a project I started out of my deep interest in telecommunications, AI, and space technology. AIwhere is still in its early stages, focusing on developing AI-native virtual space infrastructures that integrate computing, communications, and sensing into a unified system. The main goal is to enable autonomous operation of fully virtualized satellites, making these capabilities available on demand through an O-RAN architecture. We’re also exploring how to integrate these systems with terrestrial cloud infrastructure, balancing lighter AI models on board with more complex processing on Earth using digital twins.

The idea for AIwhere grew out of my doctoral research at the University of Malaga, where I worked on AI-driven mobile network management and next-generation cellular networks. It was recognized at the UMA Spinoff Awards 2023, which provided both validation and initial support. I’ve since built on this foundation, further shaping AIwhere through my work in deep tech, including participation in the BMoE program at Berkeley. Previously, I also won first prize at the IMFAHE Foundation’s inaugural Shark Tank competition with the DemVir project, which used VR and eye tracking for early dementia detection, developed jointly with Sara, a researcher at the University of La Laguna (Spain).

As a hacker at Sundai Club, I contribute to rapid AI app prototyping and tech innovation. Sundai is a fast-paced community where builders, researchers, and entrepreneurs collaborate on quick AI prototypes built through vibe coding. My work includes FeedFlip, Moodify, and Termsinator, exploring applications in content curation, social dynamics, and contract analysis. I also served as a TA for MIT’s IAP course 6.S093, mentoring students in AI-driven development.

The Neuriphonium: Brain Music

Started Jan 25, 2026. This is a side experiment. I’ve been working with an EEG Muse device for a self-funded research project, and during a Sundai music-themed hack I wondered what would happen if I tried to turn my own brain signals into sound.

The setup listens to real-time brainwave activity and maps shifts in focus, calm, energy, and relaxation (derived from alpha, beta, delta, and gamma waves) into music with the help of Google’s Lyria model. There are no controls to play with. The only thing that changes is your mental state.

Here’s how it works: when attention sharpens, the music tightens. When the mind drifts, it opens up. Kind of...
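For the curious, the core idea can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the actual project code: it assumes one EEG channel sampled at the Muse’s 256 Hz, computes relative band powers with an FFT, and turns a simple beta-vs-alpha/theta ratio into a steering phrase for a music model. The band cutoffs, the `focus` ratio, and the prompt wording are all my own placeholder choices; the real Muse streaming and Lyria API calls are not shown.

```python
import numpy as np

FS = 256  # Muse headband sample rate (Hz), assumed
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(window):
    """Relative power per EEG band from one channel's sample window."""
    freqs = np.fft.rfftfreq(len(window), d=1.0 / FS)
    psd = np.abs(np.fft.rfft(window)) ** 2
    powers = {name: psd[(freqs >= lo) & (freqs < hi)].sum()
              for name, (lo, hi) in BANDS.items()}
    total = sum(powers.values()) or 1.0
    return {name: p / total for name, p in powers.items()}

def music_prompt(p):
    """Map band powers to a (hypothetical) steering prompt for the music model."""
    focus = p["beta"] / (p["alpha"] + p["theta"] + 1e-9)
    return "tight, rhythmic, driving" if focus > 1.0 else "open, ambient, drifting"

# Demo: a fake 2-second window dominated by 10 Hz (alpha) activity
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS
window = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
p = band_powers(window)
print(music_prompt(p))  # alpha dominates, so the relaxed prompt wins
```

In the real setup this loop would run continuously on the live EEG stream, re-prompting the generator as the band profile shifts.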

Give it a quick listen if you feel like it:
https://lnkd.in/erwTqHNJ
(Video Generated with OpenArt AI)

I’m not claiming this is the first time someone has explored brain-driven music, but this is the first composition that came out of this instrument and this particular setup. It felt honest enough to share.

PS: It reminds me of the early days of musique concrète and the first electronic music pioneers, when people like Pierre Schaeffer were just experimenting and listening. Exciting times, especially if you have nothing that AI can replace...

