
Giacomo Vianello
Director, Machine Learning Engineer
Learn how computers process and understand image data, then harness the power of the latest Generative AI models to create new images.

Subscription · Monthly
17 skills
8 prerequisites
Prior to enrolling, you should have the following knowledge:
You will also need to be able to communicate fluently and professionally in written and spoken English.
Discover multimodal AI fundamentals and technologies, including models and use cases that process and generate text, images, audio, and video for richer, real-world applications.
Explore practical applications of multimodal AI by using APIs and open-source models for image captioning and audio transcription, with hands-on exercises and secure credential handling.
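The secure credential handling mentioned above boils down to one habit: read API keys from the environment instead of hardcoding them in source. A minimal sketch (the variable name `MULTIMODAL_API_KEY` is illustrative; real APIs document their own, e.g. `GOOGLE_API_KEY`):

```python
import os

def load_api_key(var_name: str = "MULTIMODAL_API_KEY") -> str:
    """Read an API credential from the environment instead of hardcoding it in source."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"Set {var_name} in your environment before running.")
    return key

# Normally you would export the key in your shell; it is set here only for the demo.
os.environ["MULTIMODAL_API_KEY"] = "demo-key"
print(load_api_key())
```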
Explore how transformers unify text, images, audio, and video through attention, embeddings, and fusion strategies, powering state-of-the-art multimodal understanding and generation.
Explore practical tools for building multimodal AI apps, compare commercial and open-source options, and use Pydantic AI to create reliable, structured, vendor-agnostic workflows.
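The "reliable, structured" workflows above rest on validating model output against a fixed schema before using it. A stdlib sketch of that idea, using a dataclass as a stand-in for a Pydantic AI output model (the `CaptionResult` schema is hypothetical, not from the course):

```python
from dataclasses import dataclass
import json

@dataclass
class CaptionResult:
    """Hypothetical schema for a model's image-caption reply."""
    caption: str
    confidence: float

def parse_model_output(raw: str) -> CaptionResult:
    """Reject replies that don't match the schema instead of passing them downstream."""
    data = json.loads(raw)
    result = CaptionResult(caption=str(data["caption"]),
                           confidence=float(data["confidence"]))
    if not 0.0 <= result.confidence <= 1.0:
        raise ValueError(f"confidence out of range: {result.confidence}")
    return result

print(parse_model_output('{"caption": "a dog on a beach", "confidence": 0.92}'))
```

Pydantic AI adds retries, vendor-agnostic model wiring, and richer validation on top of this pattern, but the core contract is the same.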
Explore enterprise visual content processing: core computer vision tasks, digital image representation, and real-world applications for efficiency, safety, and automation.
Explore vision data pipelines using Hugging Face, from dataset loading to resizing and normalization, with demos and hands-on exercises for effective image pre-processing.
Learn how embeddings convert images into compact vectors for efficient search, enable cross-modal tasks with models like CLIP, and power large-scale, robust computer vision systems.
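The search mechanism described above reduces to cosine similarity over unit-length vectors. A minimal sketch with random vectors standing in for CLIP image embeddings (the numbers are synthetic; a real system would index actual model outputs):

```python
import numpy as np

def normalize(v):
    """Scale vectors to unit length so dot products equal cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def top_k(query, index, k=3):
    """Return the k most similar index rows to the query, by cosine similarity."""
    sims = normalize(index) @ normalize(query)
    order = np.argsort(-sims)[:k]
    return order, sims[order]

rng = np.random.default_rng(0)
index = rng.normal(size=(100, 512))               # stand-ins for 100 image embeddings
query = index[42] + 0.01 * rng.normal(size=512)   # a query vector close to item 42
ids, scores = top_k(query, index)
print(ids[0])  # 42
```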
Explore how to build text-to-image and image-to-image search using CLIP embeddings, combining theory, real-world demos, hands-on practice, and solution walkthroughs.
Explore multimodal vision APIs: prompt design, parameter tuning, structured outputs, cost control, integration, and best practices for robust, efficient image analysis.
Explore Gemini Vision API basics by practicing image moderation, learning to analyze images and implement moderation workflows using real-world examples and guided hands-on exercises.
Explore Vision Transformer models: core architecture, image tokenization, self- and cross-attention, and top models (SAM, RT-DETR, DINOv2) for segmentation, detection, and enterprise use.
Explore vision transformers with hands-on demos: extract image embeddings using DINOv2 and perform object detection and segmentation using RT-DETR and SAM2.1 models.
Learn how vision-language models align images and text for tasks like search, captioning, and VQA, with focus on architectures, applications, data needs, and deploying for enterprise use.
Explore zero-shot image classification and auto-labeling for driving scenes using CLIP, enabling efficient, scalable multimodal vision applications.
Explore how diffusion models generate images by reversing noise through iterative denoising, inspired by physical diffusion processes and key to modern generative AI developments.
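The forward (noising) half of the process above has a closed form in DDPM: x_t is drawn from N(sqrt(ᾱ_t)·x0, (1 − ᾱ_t)·I). A toy numpy sketch of that forward process only; the learned reverse (denoising) network is what the lesson covers:

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x0) = N(sqrt(a_bar_t) * x0, (1 - a_bar_t) * I)."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * rng.normal(size=x0.shape)

betas = np.linspace(1e-4, 0.02, 1000)  # the linear schedule from the original DDPM paper
rng = np.random.default_rng(0)
x0 = np.ones((8, 8))                   # a toy "image"
# Early steps barely perturb x0; by the last step the sample is almost pure noise.
x_early = forward_diffuse(x0, 0, betas, rng)
x_late = forward_diffuse(x0, 999, betas, rng)
```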
Discover enterprise audio processing, core speech tasks (transcription, diarization, sentiment, TTS), key use cases, and strategies for value and integration in modern businesses.
Explore how audio is digitized for AI: sample rate, bit depth, channels, formats, and mel spectrograms for speech, plus challenges and best practices in audio preprocessing and analysis.
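The digitization parameters above determine storage cost directly: uncompressed PCM size is just duration times samples per second times bytes per sample times channels. A quick sketch:

```python
def pcm_bytes(seconds: float, sample_rate: int, bit_depth: int, channels: int) -> int:
    """Uncompressed PCM size = duration x samples/second x bytes/sample x channels."""
    return int(seconds * sample_rate * (bit_depth // 8) * channels)

# One minute of CD-quality audio: 44.1 kHz sample rate, 16-bit depth, stereo.
print(pcm_bytes(60, 44_100, 16, 2))  # 10584000 bytes, roughly 10 MB per minute
```

This is why preprocessing pipelines routinely downsample to 16 kHz mono before feeding speech models.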
Explore audio processing with librosa: load, resample, convert, and analyze audio files; visualize with mel spectrograms and apply techniques through hands-on exercises.
Explore audio embeddings for efficient sound classification and retrieval, using models like CLAP to enable semantic search and robust text-based audio analysis at scale.
Explore using CLAP for sound retrieval, similarity, and zero-shot classification, then apply these skills to detect fan on/off states in real audio data.
Discover automatic speech recognition with Whisper: a robust, multilingual, open-source model for accurate transcription, translation, and speech processing in real-world audio.
Explore real-world speech transcription and translation with Whisper and Gemini, using Python to process, segment, and align audio with text, including multilingual support.
Explore advances in Audio Intelligence: multimodal systems, speech recognition, TTS, enterprise controls, creative workflows, and ethics for robust, secure, and accessible audio solutions.
Explore audio sentiment and command analysis using Pydantic AI and Gemini; learn to extract emotions and recognize spoken commands from audio with real-world datasets and hands-on exercises.
Explore voice content moderation: real-time and batch pipelines, compliance, privacy, layered detection, and operational excellence for secure and fair audio classification.
Learn to build a voice moderation system using Gemini to transcribe audio, detect personal data disclosures, and flag policy violations in customer service recordings.
Discover how enterprise video AI overcomes temporal complexity using smart frame selection for efficient understanding, search, classification, moderation, and generation at scale.
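The simplest form of the frame selection mentioned above is uniform sampling: spread a fixed frame budget evenly across the clip. A minimal sketch (real systems layer smarter heuristics, such as scene-change detection, on top of this):

```python
def sample_frames(n_frames: int, budget: int) -> list[int]:
    """Pick `budget` frame indices spread evenly through the video (segment midpoints)."""
    if budget >= n_frames:
        return list(range(n_frames))
    step = n_frames / budget
    return [int(i * step + step / 2) for i in range(budget)]

print(sample_frames(900, 6))  # a 30 s clip at 30 fps reduced to 6 representative frames
```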
Explore key AI models like YOLO for real-time detection, CoTracker and TimeSformer for motion and temporal understanding, enabling advanced, scalable enterprise video analytics.
Learn how to detect and track objects in videos using YOLOv9, apply multi-object tracking, handle small objects, and count items crossing boundaries in practical scenarios.
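Once a tracker assigns stable IDs, the boundary-counting task above becomes a side-of-line test per track. A pure-Python sketch (the per-frame coordinates would come from a detector-plus-tracker pipeline such as YOLOv9 with a tracking layer; the numbers here are made up):

```python
def count_crossings(tracks: dict[int, list[float]], line_y: float) -> int:
    """Count tracked objects whose position moves from one side of line_y to the other."""
    crossed = 0
    for positions in tracks.values():
        for prev, cur in zip(positions, positions[1:]):
            if (prev < line_y) != (cur < line_y):
                crossed += 1
                break  # count each tracked object at most once
    return crossed

# Per-frame y-coordinates for three tracked objects.
tracks = {1: [10, 40, 60], 2: [10, 20, 30], 3: [80, 55, 40]}
print(count_crossings(tracks, line_y=50))  # 2: objects 1 and 3 cross, object 2 does not
```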
Explore methods for video analysis and search using foundation models and CLIP4Clip, balancing temporal understanding, cost, and retrieval accuracy for enterprise applications.
Explore video understanding with Gemini and CLIP4Clip: learn automated video description, key moment detection, and natural language video search using AI models and structured outputs.
Learn to classify and moderate video by modeling temporal patterns, handling real-world challenges, and combining automation with human oversight for scale, accuracy, and compliance.
Learn to build automated systems for video classification and moderation with Gemini and Pydantic AI, including action recognition and safety compliance in real-world scenarios.
Explore generative video AI tools and workflows that turn text, images, or footage into dynamic content for marketing, training, and creative use while ensuring quality and compliance.
Learn to generate marketing videos with Veo 3 using both text-to-video and image-to-video workflows, and understand their strengths, limitations, and real-world applications.
Explore deployment of multimodal AI systems for text, images, audio, video via unified APIs, multi-API orchestration, and custom solutions, balancing speed, cost, and control.
Explore tools and strategies for implementing, serving, and monitoring AI solutions, from rapid prototyping to production, including unified APIs, orchestration, and managed platforms.
Learn to build multimodal chatbots and analysis apps using Gradio and Pydantic AI, covering async programming, media inputs, rate limiting, and interface customization.
Learn to monitor and log multimodal AI systems, tracking performance, costs, and failures across modalities for optimized, reliable, and coherent production deployments.
Learn to implement logging and performance monitoring for multimodal AI chatbots using Gradio and Arize Phoenix, enabling robust analytics, debugging, and cost tracking.
Learn how to evaluate multimodal AI apps using user feedback systems and testing methods, blending human review, automated metrics, and continuous monitoring for quality improvement.
Learn to build robust testing frameworks for multimodal AI apps using Pydantic Evals, covering structured outputs, semantic evaluation, custom evaluators, and hands-on exercises.
Learn strategies to scale multimodal AI: unified APIs, multi-API pipelines, and custom deployments, focusing on performance, cost, reliability, and architectural trade-offs.
In this project, students will create an AI agent that simulates customer service scenarios and specialized monitoring agents that analyze communications across text, images, videos, and audio.
1 instructor
Unlike typical professors, our instructors come from Fortune 500 and Global 2000 companies and have demonstrated leadership and expertise in their professions:

Giacomo Vianello
Director, Machine Learning Engineer
