As part of coming up to speed quickly with creating AI applications, I’ve decided to start an ambitious project, and to bring you along with me. I call it HyperVideo.
Picture a rectangle in which a video is playing (typically news or an instructional video, but it can be anything). Next to the video is a transcript. Under the video are a few buttons. One pauses the video and lets you inquire about something that was just said (or anything else that comes to mind, for that matter). The video player then takes you to Wikipedia, Wiktionary, and any other public source that might best answer your question. It may even begin playing a related video (hence the name HyperVideo).
This touches on a number of the more interesting aspects of Azure AI, among them speech to text and generative AI, and I’ll probably build it out to handle some additional functionality as I go (translation to other languages? text to speech?).
I expect development to follow a hockey-stick curve: slow going at first, then rapid progress as I learn more and as the project comes along.
First step: ask Copilot to take a crack at it, using Blazor as the front end and some combination of SQL Server and Blob storage as the back end. SQL Server may be overkill, so I’ll look at some lighter-weight alternatives as well.
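One lighter-weight alternative worth trying for the transcript side is SQLite (the videos themselves would still live in Blob storage). A minimal sketch, with a table and column names I’ve invented for illustration, shown here in Python since its standard library ships with SQLite:

```python
import sqlite3

# In-memory database for illustration; the real app would use a file,
# or stay on SQL Server if the schema grows.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE transcript_segments (
        video_id  TEXT NOT NULL,
        start_sec REAL NOT NULL,
        end_sec   REAL NOT NULL,
        text      TEXT NOT NULL
    )
""")
con.executemany(
    "INSERT INTO transcript_segments VALUES (?, ?, ?, ?)",
    [
        ("intro", 0.0, 4.2, "Welcome to the program."),
        ("intro", 4.2, 9.8, "Today we discuss the Hubble constant."),
    ],
)

# Look up the segment playing at the moment the user paused.
row = con.execute(
    "SELECT text FROM transcript_segments "
    "WHERE video_id = ? AND start_sec <= ? AND end_sec >= ?",
    ("intro", 6.0, 6.0),
).fetchone()
print(row[0])  # "Today we discuss the Hubble constant."
```

The same schema translates directly to SQL Server if SQLite turns out to be too light after all.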
I’ll keep you up to date as I go…





































