![[Best AI Tools for Creators 01.png]]

### Thoughts on an AI Crash
- Every phenomenon can be explained, and the pace of technological change keeps accelerating.
- There is some hyperbole out there, but we are not in a bubble. This is real technology, it has economies of scale, and it is meaningful as a technology that is exciting and frightening at the same time (like fire).
- The industry has already matured, and all kinds of things are happening alongside the new technology. AI has arrived, and many companies are working through what it is and how to use it.
- Slapping an "AI" sticker on something to raise money creates hype, but that is a natural part of a technology emerging and maturing. What is needed is caution and discernment.

### AI Workflows
- Renee Teeley
    - Tries to make the most of AI that is already integrated into her current workflow. For example, since Adobe is part of her workflow, she uses Firefly.
    - Writes scripts with tools like Kajabi and Descript.
    - Uses text-based editing: the video is transcribed into a Word-doc-like text, and deleting text edits out the corresponding part of the video.
- Roberto Blake
    - Uses Adobe Premiere's AI transcription to generate captions automatically, exports the SRT files, edits them, and uploads them to YouTube as closed captions. Translating into multiple languages is also perfectly possible at this step.
    - With this workflow, a 20-minute video that used to take about an hour of work now takes less than 10 minutes.
    - This frees him up for other creative work: he can focus more on sound design, and put his energy into the creative process instead of repetitive tasks.
    - Produces multi-platform content with Focus, and uses AI transcriptions of his podcast to create episode summaries. Cutting the mechanical, repetitive work out of the workflow lets him get much more done.
- Dylan Jorgensen
    - **Research and scripting**:
        - **ChatGPT**: used at the start of research to ideate on content topics. Feeds in long content such as research papers and gets help finding the important parts.
        - **Gemini and Claude**: other AI tools used alongside ChatGPT.
    - **Video editing**:
        - **Gling**: removes filler like "um" and "ah", and when the same sentence is repeated several times, removes the earlier takes and keeps the last. Not perfect, but saves 80 to 90% of the time on the first editing pass.
        - **Suno**: a text-to-audio tool; lately **Udio** has been performing even better.
    - **B-roll and animation**:
        - **Diffusion models (DALL-E and Midjourney)**: used for image generation, generating character-based images repeatedly while keeping the character consistent.
        - **Runway ML**: used to add animation and apply a motion-graphics style.
    - **Captions and final edit**:
        - **Submagic and Taps**: used for short-form captions and similar tasks.

### Why Pay for AI
- If you use it often and expect better performance from it, the paid version is worth it.
- GPT-3 has 150 billion parameters, while GPT-4, the paid version, has a trillion, so it is superior.
- The way they look at AI is interesting too: not as something that takes jobs away, but as a freelancer hired for money.
- The AI is not a real person, but they work with it as a copilot. In other words, they treat it like a paid freelancer: the original source of all content is the creator, and the AI is a great colleague that is paid to support the creator as a copilot.

### Limits of Image-Generation AI
- Limited for infographics
    - Because GPT generates images from text, it does not produce good results for content like infographics, where fine detail and precise visuals matter.
    - It is good for infographic inspiration, but not much help when the actual detailed infographic work has to be done.
- Image quality and control
    - AI-generated images can look good, but they diverge from the creator's intent, and when you try to express details precisely you have no fine control over the AI, so in the end a professional has to fix the details.
    - Structurally, text-to-image generation depends heavily on Source and Context: the typed description (the Source) can never be as precise as the image to be generated, and it also differs from the context in the creator's head, so the output inevitably differs from what was imagined.

### AI Dubbing
- The MrBeast case
    - The YouTuber MrBeast has 258 million subscribers (as of May 16, 2024). AI dubbing played a large role in that subscriber growth.
    - He used AI dubbing to expand his channel into multiple languages, and the resulting localized YouTube channels grew his subscriber count substantially. ![[youtuber MrBeast.jpg]]
- AI dubbing technology
    - YouTube is running various experiments, including AI dubbing where the translation is synced to the speaker's lip movements.
    - A future where the next YouTube star like MrBeast comes out of Indonesia would not be strange at all.

### AI-Assisted Ideation
- **ChatGPT and other AI tools**:
    - **vidIQ**: a YouTube production tool whose analytics and AI generate titles, scripts, and audio from a simple keyword. Used to get video ideas. ![[AI tool of VIQ.jpg]]
    - **Gemini 1.5**: Google's AI tool. Upload an entire video and use it for things like predicting how the video will perform. ![[AI tool of Gemini.jpg]]
- **AI in the production process**:
    - **TubeSpanner**: AI recommendations for script writing and title selection ![[AI tool of tubespanner.jpg]]
    - **MidJourney**: for sketching video design, color palettes, lighting, poses, and modeling ![[AI tool of Midjourney.jpg]]
    - **Runway ML**: background removal and animation ![[Runway.jpg]]
    - **Super.fans**: analyzes fan data to improve content and marketing strategy ![[super fans.jpg]]
    - **Opus**: automatically cuts content into clips that can be used to get more views or for promotion ![[OpusClip.jpg]]

---

### What is AI good for?
- The panelists on the current moment
    - The market has already matured considerably.
    - They are thinking about how to implement it.
    - Hype cycle.
- Explaining AI to a fifth grader
    - A friend that helps with your work.
    - It often makes errors, so you have to check it.
    - You can spend less time, and spend it where you want.
    - AI is not software, it is a tool.

### AI Goals
- Renee
    - Adobe Creative
    - Kajabi?
    - Using it for scripting
    - Uses script-based editing through a Word doc
- Robert
    - Uses Premiere Pro transcription
    - Generates captions with generative AI
    - Used when exporting SRT files to YouTube in other languages
    - Uses it across the workflow in a great variety of ways
    - Does a podcast and uses AI transcription
    - Sends the transcription to ChatGPT to generate summaries, then uses that content in other formats and on other platforms
- Dylan
    - Uses eight tools
    - Starts research with ChatGPT
    - Also uses Gemini and Claude
    - Gling(?): similar to Premiere; removes the first couple of takes
    - For sound design, uses a text-prompt tool called Udio. More than just music. Suno and Udio.
    - Uses image-generation tools like DALL-E and Midjourney
    - Writes captions with a tool called Submagic

### Why pay for AI instead of using it for free?
- Dylan
    - It is right to hire the smarter person.
- Robert
    - I myself am the source of origin.
    - Because we can build it the way we want.
- Renee
    - With each update it is clearly better.
    - Multimodal: DALL-E, making images, working with text.
    - Plays Mad Libs with ChatGPT prompts(?)
    - Even if you don't know what to ask, you can ask it, refine your questions as you go, and train it.
    - Use it for things like layout design and storyboarding, because when you ask for an image, you may not like the result.

### What is it not good for?
- Robert
    - Infographics: layout and the like come out poorly.
    - For consistent character design you still need a person; it can be used when scaling that up or changing backgrounds.
- Dylan
    - Video doesn't work well yet.
    - But it seems like it does everything else well.
- Renee
    - Even if you have an exact image in mind and describe it precisely, you can't expect to get the image you want.

### Dubbing
- Robert
    - Ditto dubbing, localization.
    - Something like Squid Game with localization would be amazing.
    - Google will do this on YouTube too; the reason to use Ditto is that it is fast and accurate. But it is not cheap.
    - The next YouTube star could be Indonesian; it could be all kinds of people.

### Ideation
- Dylan
    - Doesn't use it for the initial ideation itself; wants the creativity to be genuinely his own.
    - But once underway, uses GPT more for variations and expansion, getting ideas from it.
- Robert
    - Trying out Spotter AI; very good for initial market research and the like.
    - 1of10.com(?)
    - TubeSpanner

### Production
- Renee
    - Sony has built smart automation features into its cameras.
- Dylan
    - Adobe's noise reduction.
    - Uses Final Cut as a base; mastering in HDR, H.265.
- Robert
    - Uses DALL-E for visual analysis: give it a photo and it can analyze which elements are in it.
    - Uses AI to enhance DJI audio.
    - Adobe Express is perfect for creating a clean background.
- Renee
    - Kajabi

### Audio
- Renee
    - Press one button and it analyzes everything and makes the background sound studio-clean.
    - Sound effects?
    - UDIO
- Opus

---

Make sure you come back, because we have a lot of really good speakers here. But I'm super excited about this panel. This panel is all about the best AI tools for creators.
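The Descript-style text-based editing mentioned in the notes above (delete words in the transcript and the matching stretch of video is cut) boils down to mapping word-level timestamps to keep-ranges. A minimal sketch, assuming word timestamps are already available from a transcription step; the data shapes here are illustrative, not Descript's actual API:

```python
# Text-based video editing, sketched: each transcript word carries start/end
# times, so deleting words in the text yields the video spans to keep.

def keep_ranges(words, deleted_indices, gap=0.05):
    """words: list of (text, start_sec, end_sec) tuples.
    deleted_indices: set of word positions removed in the text editor.
    Returns merged (start, end) spans of the source video that survive."""
    spans = []
    for i, (_, start, end) in enumerate(words):
        if i in deleted_indices:
            continue  # this word was deleted in the transcript
        if spans and start - spans[-1][1] <= gap:
            spans[-1] = (spans[-1][0], end)  # merge near-adjacent words
        else:
            spans.append((start, end))
    return spans

words = [("So", 0.0, 0.2), ("um", 0.2, 0.5),
         ("welcome", 0.5, 1.0), ("everyone", 1.0, 1.6)]
# Deleting word 1 ("um") leaves two spans to keep in the final cut.
print(keep_ranges(words, deleted_indices={1}))
```

The resulting spans would then be handed to an editor or a tool like ffmpeg to make the actual cuts; the `gap` threshold just keeps natural word-to-word spacing from fragmenting the timeline.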
I'm gonna invite my panelists up here: Roberto Blake, who's founder of a creator academy; Renee Teeley, Strategic Partnerships; and Dylan Jorgensen, YouTuber, Dylan Curious, Curious Future Productions. Come on up, guys, and as they come up, thank you. I'll sit down in a sec. ==How many of you guys are using AI tools in your creative process right now?== Yeah, we got a little bit of depth on that. A little bit there. ==How many of you find them useful?== Good, good, good. ==How many of you are confused by AI right now?== That's good. A year ago, Renee and I were here and we did something on AI and everyone was like, what, what are you even doing? ==How many of you worry that AI is gonna take your job as a creator away?== You got one, a couple over there. Yeah, a couple of people over there. ==How about copyright, getting your content stolen?== Yeah, people there too. Right. Well, we're gonna cover some of that. We are gonna cover tools too, but let's start off with a little bit on the hype. Like I said, a year ago we were here and we were trying to figure out what AI was good for. Six months ago it was like, oh my God, it's gonna do everything, it's gonna replace everything, and I've fallen in love with an AI. Now it feels a little bit like the bloom is off the rose, like maybe it's not as good. ==Are we in the middle of a crash, Dylan? Are we in a crash with AI?== I don't know. I mean, I think everything that's happening has an explanation. I think stuff like Sora is gonna blow people's minds the same way each generation did. It might feel that way because of how marketing works, but the technology is progressing along faster and faster. And Roberto, what do you think? There might be some hyperbole out there, but we're not in a bubble, right? Yeah, just a little bit. But here's the thing.
We're not in a bubble at all when it comes to the technology and its implications. There are true economies of scale, and they should be exciting and terrifying. Yeah. Renee, what do you think? Yeah, I think the industry is just maturing, and this happens a lot with any new type of technology. I think we're over a little bit of the hype cycle, and we're over companies just slapping AI on... well, we're not quite over this, but right now there are still a lot of companies that are just slapping AI onto something. So I think we're still kind of going through that, but the industry is just maturing and we're figuring out how to actually use it and implement it. Don't, don't knock that sticker, because that sticker is helping my stock portfolio. Well, no, but I do think we're in a little bit of a hype cycle right here. I think we see a bunch of companies coming out with a thin layer of AI on top of something and raising a round on it, and so there will be some excess rung out of the system. It's a natural hype cycle; this happens. So when you see all those stories about AI, don't buy it all. There are things it can do, but there are things that it can't do right now, and we'll talk about some of that as well. So real quickly, just to level set: you guys are all smarter than a fifth grader, right? OK. ==Explain AI to a fifth grader.== Yeah, I would say it's kind of like having a friend that can give you all of the answers to a test, but they may be all the wrong answers; you have to fact-check them. Yeah. I think honestly it's kind of like having a friend where you can use it for good and for bad, so you can spend less time on the things you don't like doing and just work on the things that you do. Roberto? AI is like having a very good friend, similar to Mike Ross.
But with a photographic memory, and an array of knowledge that may or may not be useful. You have to make sure he didn't dip into the tequila before you ask. Keep the tequila away from your AI model. I would say that AI is not software; AI is a tool that learns to make a decision. It's not programmed, it's not pre-arranged. It's the first time really on earth we've had something that learns, and it teaches us. Yeah, and basically, for those of you who remember statistics in college, it's a really, really massive statistics machine, right? It's predicting what words to use and what images to create. Fascinating. Let's move on a little bit and talk about what you guys are doing with AI in your workflow today. ==Renee, may I start with you? What AI tools are you using as a creator today to, I don't know, make better content, make it easier, or play games, or what?== Yeah, my go-to is AI that's already integrated into my existing workflow. So instead of just finding new tools and technology, there's a lot of AI being incorporated into things like Adobe Creative Cloud, which I use. And so that's kind of throughout. I also use Kajabi and Descript, and, full disclosure, I've done some work with Kajabi. And we're gonna talk about what Kajabi and Descript are. Absolutely. So I'll start with Descript. It was really one of the first companies that allowed you to upload a video, transcribe it, and then edit it like a Word doc. So if you wanna delete a part of your video, you basically just highlight the text in a document and hit delete, and it would edit that part of the video. They've since added a lot of different AI technology, in terms of being able to learn your voice. And so if you train it on your own voice, you can type things out and then have it actually say something in your own voice.
And it works, in my opinion, incredibly well if you're using it for just a little bit of audio, not a full video, something like that. And then Kajabi, just to quickly describe it: Kajabi is a tool that basically helps you manage your creative business. So if you want to release a membership site or sell courses or do email marketing, you can do all that within Kajabi, and they have AI tools that help you create courses and also help with scripting. And you can take one piece of content and turn it into 40 different pieces of content. So you can take a video and turn it into a blog. Yeah, the repurposing work. ==Yeah, Roberto, what are you using today in your work?== Oh Lord. I'll keep it short, just a couple of highlights we can talk about more later. So with Premiere Pro from Adobe, I actually run through the Adobe Sensei technology that they've built in, for AI transcription first. And then I also generate AI captions from that as well, which can be baked into the video if I want, or I can just use them to export SRT files so that I can go to YouTube or elsewhere and have closed captions. It's also capable of doing that in other languages. The other thing I use it for is my video editing workflow, because it can now detect all pauses, and then the AI will go through and actually do ripple deletes throughout. That can do all of my rough cutting for me. Then I can use the audio enhance feature and go back and get studio-quality mastered audio for my talking-head portion. So on what could be a 20-minute end-to-end video, the turnaround on that...
...what would normally be an hour of work between Premiere Pro and Audition, and then also paying to go to Rev.com for my captions: it's actually able to turn about an hour and a half of work into a less-than-10-minute process, which is quite amazing. That's phenomenal on that side. So I have that, and then the question gets big. ==What do you do with the extra 50 minutes or hour that you have back?== Oh, well, in that case, what I get to do is be a little bit more thoughtful about certain aspects of sound design. I think it makes better videos. The time that normally would have been spent on the agonizing task of mechanical work, I can now put that energy into the creative process that actually uses the vision I have for the video, and work more on that. So that's just one aspect, from a practical workload standpoint. I use Focus, who I've partnered with, and I take said video and I'm able to extrapolate and repurpose multiple videos for my cross-platform schedule, in the vertical short format and also with widescreen clips that I can repurpose for syndication. So I'm able to do that with AI as well. And one of my favorites: for a podcast, and I love my podcast workflow for this, I do a lot of it live through StreamYard, which also goes and produces AI transcripts. The funny thing with that is I can then take my transcript to ChatGPT with a custom prompt that I wrote to extract accurate show notes from my transcript, and then use that, and have episode summaries both in short form and long form. And then I also get to generate quotables from my guests that I can use to make graphical quotables for other platforms. Very cool, good stuff. ==Click, by the way, is over there afterwards; you should definitely go check them out. I love that stuff too. Awesome.
How about you, Dylan? You both use AI but also explain AI to people. So talk a little bit about what you're doing, and anything you want to add.== Yeah. So I have a YouTube channel, it's called Dylan Curious, and I talk about AI. I'm paying for eight different AI tools, and I'll list them out just so you guys know; after working with a whole bunch, these are the kind of ones that have survived. I always start my research using ChatGPT. Everybody's used it, but Gemini and Claude are both other good options. And I pre-prompt it with something that I've already written that describes who I am and what I care about. I'll throw in huge long-context-window research papers that would just be agonizing to read and let it find the most important parts for me. After that process, I film the video; so it helps me with scripting. And then there's a tool called Gling. I'm not great at video editing, but Gling takes out all the ums and ahs, and if I say the same sentence two or three times in a similar way, it will take the first two takes out. The nice thing about it is that in scripting, you just say something over and over again, and the very last one, you know, is going to be the one that survives, and then you have to tweak it a little bit. It's not always perfect, but I'd say it saves probably 80 to 90% of the time on my first edit. After that, for sound design: everybody's been using this tool called Suno, right? You guys might have heard that's the best way to text-prompt into audio. But as of like two days ago, I think this new tool called Udio is even better. I'm making a video about it now. It totally took me by surprise, but Udio can actually generate full, like, comedic-timing comedy scripts and so much more than just music. And there are a lot of use cases there that you might see popping up.
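The rough-cut cleanup Dylan describes (dropping "um"/"ah" and keeping only the last take when a sentence is repeated) can be sketched as a plain filter over transcribed sentences. A minimal sketch of that idea; the heuristic below is illustrative, not the actual algorithm any of these tools use:

```python
# Gling-style first pass, sketched: strip filler words, and when the same
# sentence appears several times in a row, keep only the final take.

FILLERS = {"um", "uh", "ah"}

def strip_fillers(sentence):
    """Drop standalone filler words, ignoring case and trailing punctuation."""
    return " ".join(w for w in sentence.split()
                    if w.lower().strip(",.") not in FILLERS)

def keep_last_takes(sentences):
    """Collapse consecutive duplicate sentences, keeping the last take."""
    cleaned = [strip_fillers(s) for s in sentences]
    kept = []
    for s in cleaned:
        if kept and kept[-1].lower() == s.lower():
            kept[-1] = s  # a retake replaces the earlier attempt
        else:
            kept.append(s)
    return kept

takes = ["Um welcome to the show", "Welcome to the show",
         "Today we talk about AI"]
print(keep_last_takes(takes))
```

A real tool would work on word-level timestamps and fuzzy-match near-duplicate takes rather than requiring exact repeats; this only shows the keep-the-last-take logic.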
So those are the two models, Suno and Udio. When I get my video in, I do B-roll with the traditional diffusion models, so DALL-E and Midjourney imagery, if you want to keep a character consistent across multiple image generations. But also I'll take that into another tool called Runway ML, and that will give me a little bit of animation. Sometimes I can add little slides or pans to just give it kind of a motion-graphics look, and I've customized this specific paper-cut-graphic style look. For you guys, just find something that kind of works for you, to keep that unity. Yeah. And then at the end of the day, I throw it in one more tool, which is called Submagic. There's another one called Taps, which just helps with the short-form captions and things like that. ==So, yeah, I wanna talk about ChatGPT for a minute. How many of you are using ChatGPT regularly? How many people pay for it? OK, just a couple. Now, I pay for it.== You do? You do? So let's start with you, since you talked about paying for it. Why pay for it instead of using it for free? Because you use it at scale. I mean, it's just about how much you use it. If you're using it occasionally, it's not that big a deal. But when it really comes down to it, GPT-4 is a trillion-parameter model and GPT-3 is a 150-billion-parameter model, and there just is a real intelligence difference in there. Now, GPT-3 is the free version and 4 is the paid version; you probably all knew that. Yeah. So it's just about: do you wanna hire somebody who's OK, or do you wanna hire somebody who's smart? It depends on what the job is, if you need either. ==So, Roberto, why do you pay for it?== It's the cheapest fractional workforce of all time. So for $20 a month, I'm using the best AI model. I've already made myself hundreds of customized prompts and stuff like that.
I share those with my community. And the other thing is the way I use it: I use role, task, goal, and then I give it context, and that's one of like 10 frameworks that I came up with for prompt engineering. The way that ends up working, the whole idea is: as if I wrote a job description for somebody in terms of the roles and responsibilities, to actually treat it like a workforce. I treat it as if I'm building out individual GPTs to serve the function of a task, at a higher level, as a team. And as a result of that, I have these copilots, these creative copilots. They're not really there, in my estimation, to generate things, because I'm the source of origin. I'm the source of origin; I'm also the source of context. They're there to extrapolate, amplify, modify, and add to my work, in the same way that a freelance workforce or even a staffer would if I were working with them remotely. And so when I speak to the AI, I treat it like a remote worker, essentially a fractional hire within my workforce. ==So you're using GPTs. How many people know what a GPT is?== Let's talk about that for a minute, because I think it's really important and under-understood. ChatGPT could have come up with a better word, but anyway. A GPT is a way... you know, the thing about ChatGPT and most of these models is they're brain-dead. You tell them something, then you go away, and when you come back they've forgotten everything you told them. With a GPT, you can actually save what you said. You can give it a role, you can give it a task, give it a goal, put it in a GPT, and then when you come back to it, it knows it again. So it gets you that much further. I use one for creating events like this, where I give it a bunch of my old session descriptions and titles, I tell it that it's a great session-description-and-title writer, and I give it a session's notes and it comes up with descriptions and titles. Who wants to describe it better?
Go ahead, Dylan. Well, let me just throw in my thing: next year you're gonna hear the phrase "agentic workflows" quite a bit. You don't hear it much now, but that's agentic workflows. And what agentic is referring to is agents, multiple agents. So you call them GPTs; they're custom prompts for a system like ChatGPT, and each has a character, an understanding of who it is and what it does. But in the future, there will be software developed by, say, ten GPTs: one will act like a CTO, one will act like a CMO, one will code it and then send it to QA; QA will send it back to the coder; the coder will send it to the marketer. These are all gonna be individual GPTs, individual agents. The more general term, agentic workflows, is where you have an entire company, an entire system, all working together, versus my first scenario. Yeah. So we're all losing our jobs; whole companies are being replaced. But the point is, with ChatGPT-4, if you pay for it, you get access to this and you can build your own. It can be simple, or it can be more complex. Renee, when you talk about why you pay? Yeah. So I use it every single day, so the $20 a month is a no-brainer for me. You get access to the latest version available, and each time they release a new version or an update, which they did a couple of days ago, right? Yes. Each time they release a new version, it's typically substantially better than the last. And there are different features available in it, so you can go multimodal: if you want to create an image, you don't have to do something separate from just working with text. So for me, it makes complete sense to pay for that. I wanna layer on top of what Roberto said in terms of how you structure your prompts. I do something quite similar: I basically play Mad Libs with my ChatGPT prompts.
And I think it's really important, when you're using something like ChatGPT, to give it context. So I always give it a role, similar to how you're giving yours a role; that's a certain type of employee. For me, I say: you are a creator-economy expert leading a session at NAB. So give it some context, and then ask for what you want and in what format. So if you want a summary of something, or if you want a bulleted list, make sure that you're giving it that context, and then you're gonna get much better results. Just one more thing. I don't know if you've tried this, but if you don't know what you want from these systems, you can ask them to ask you: "I want to build software. You're the CEO; tell me what you need from me to do a good job." And you can actually go back and forth through it. Here's a prompt for you guys to actually help with not knowing what to ask. You can work backwards from what you might want to achieve and say: "If I wanted to achieve this, and I wanted to be able to ask the best questions possible on the way to achieving that goal, what are the 20 most practical questions that I can ask?" And you can generate those questions, and then you can ask it them, and just say "fantastic, can you elaborate," and answer some of them, and go from there. So you can, frankly, reverse-engineer any of your own goals, and you can use this to some degree to train yourself up and make yourself more knowledgeable. All right. So hopefully we've convinced you guys to subscribe to ChatGPT-4; we're not making any money at this, but we all use it. The last thing I wanna say on this, and then I wanna move on, because we've got a lot of tools to talk about: you talked about DALL-E and the image stuff, and here's what I mean.
I cannot overemphasize how important I find that, because I used Midjourney for a while and then I stopped and moved to DALL-E inside ChatGPT. It may not be as good; in some cases Midjourney could do a better job. But you can actually have a conversation with it: it gives you an image and you can say, oh, do this, do this. So having a conversation to try to get images right is really, really helpful. And I think a lot of you are trying to make it do too much work on images, and it's not gonna be, say, as good as working with a human being on that. But I will tell you this: use it for layout design. This will blow your mind in terms of layout design, layout inspiration, and storyboarding. Yeah. And the last thing I will give you all on images; I only just learned this. If you create an image in DALL-E inside ChatGPT and the image shows up on screen, you click on it so it shows up bigger, and there's a little "i" up in the upper right-hand corner. If you hit that "i", you can see how it took your prompt and rewrote it into the prompt that it actually fed into the model. Why that's important: you can copy that prompt and then tweak it. So it's really cool. OK. ==What's it not good for?== Who wants to name something they tried, that they thought it would be good for, and it's just not? It's not good at creating infographics. It's not good at creating anything that has to be visually text-based, and it's not good at typography. There is one tool, whose name escapes me, that Dylan probably knows, that is good at typography when it comes to generative AI. But most visual generative AI tools absolutely suck and are atrocious at infographics and typography. I do recommend them for infographics in one way, though: they can be good for layout and inspiration. Do you want to take that on, infographics, or... yeah.
It struggles with words, it doesn't do fingers... yeah. Yeah. I mean, it can do some things. There's not a lot that it can't do until you just move to the next modality. So right now it doesn't do video well, and then you just see everything kind of moving into other spaces. But I don't know, it does so much stuff. It can read brain scans now; it can do all sorts of things that don't even seem like they should be possible, and it feels like it's gonna keep going. So I'm very impressed. What doesn't it do well? Yeah, well, it does do this, but it doesn't do it well. So I think that when it comes to images specifically: if you are stuck and you don't have something in your mind of exactly what you want it to look like, I think it's really good. If you give it something general, it can come up with something really interesting. But if you have a very specific image of what you want, and you're looking for someone with a yellow blazer, and, you know, they have purple hair and they're wearing glasses and they're a certain age, and you put all of that in, you're probably not gonna get what you actually want. It may take some time. I think if you want something super specific, I still, like, struggle to get exactly what I want, and I can find myself getting really frustrated, going back and forth until eventually I'm like, no, I'll just do something different. It's only as good as the source material and context that it's given. That's why I keep saying it's an enhancement; that's why I don't like the concept of it being overly emphasized as generative. I think its real value is scale. I believe that, for example, if I go and get a freelancer, or five, to do, let's say, the character design for me, now it gives me the ability to have consistency and continuity there. I can still pay an artist to do real artwork, to come up with something that we create together.
Then I can use the AI tool to scale it and add different dimensions and context. Maybe that artist is good at character design, but they're not good at action per se, or they're not good at backgrounds. And so now I can take this thing, layer on top of it, and use the AI to enhance what is already there, instead of trying to come up with something completely new, which I think it's actually not nearly as good at as a human. Same thing with writing. If I feed in my book as its context, so it can anticipate my taste and so on and so forth, that's gonna be better, feeding it my whole book, than it asking me 20 to 50 questions trying to figure out my personality. Everything is only as good as its source material, and that would hold true if you hired a human being as well. The materials that you give and supply your workforce to work with and build on top of are still the ultimate foundation, which is why I still feel we as humans are not replaced by the AI: I think we can enhance what we do, and we complement each other; we're better together than standing alone. I think we're better as cyborgs than as purely technical machines or organic organisms; I think the organic combos are gonna be better. No, I was just gonna add one last thing on images. We've been talking a lot about DALL-E, DALL-E 3, which is inside ChatGPT-4, but you have to pay for it. There is a free way into it: it's called Bing Image Creator. And everybody probably shies away from Bing because of how they keep shoving it at you, but go to Bing Image Creator. This is the DALL-E 3 engine from ChatGPT-4. It won't let you have a conversation with it to make better images, but you can take advantage of that model. Also, Leonardo AI is a great front end for Stable Diffusion, which does some good stuff as well. It's also free up to a certain level.
I don't know many other free AI tools offhand. Well, it's not free, but there is AI being built into tools that you may already have a subscription for, like Canva or Adobe. Yeah. All right, I'm gonna go through a bunch of different categories and talk to these guys about what they like and what they don't like: ideation, production, editing, distribution, analytics, community management, monetization, marketing, back office, accounting, other business functions. Anything else? ==AI dubbing.== Dubbing? Did I not say dubbing? All right, let's start with dubbing. Who wants to take it? What's a good tool? So, Ditto dubbing. I'm not doing an ad; I'm not here to shill for the company or anything like that. But Ditto dubbing is something I'm looking at. It was introduced to me by Derral Eves. And it was used initially, before they hired an in-house team, by MrBeast to scale his YouTube channel through localization, so that there would be local YouTube channels at first, and then YouTube consolidated his subscribership, which is why he's now passed T-Series as the largest YouTube channel, with over a quarter of a billion subscribers; that localization was part of that growth. And he was using the Netflix concept: if you guys remember when Squid Game came out, what they did was dub it in every single language for every single country, localized. And this is incredibly powerful. It's gonna change the entire scope and dimension of content creation, and even how we start to think of content creation.
For example, for those of us who are US English speakers, we should start thinking about some level of neutrality in our content, knowing it can translate over to those other languages. YouTube and Google are doing their own version, Aloud, which is free, but you're going to have some restrictions on distributing to other platforms if you use their version. The reason I want to use the paid tool is that it gives me speed and accuracy, it can handle all of these other languages, and it carries more context: even if you're a non-English speaker, say a German speaker, it keeps that German inflection in the speaker's voice for authenticity when translating into English. So it's going to allow a lot of people in foreign markets to reach the US market and US CPM and RPM rates in ad revenue. I think it's massively transformative. It is fast and accurate, but it is not cheap; that's its one knock. The thing that strikes me about dubbing is this: what we learned in the early days when YouTube came out was that creativity is evenly distributed around the world. You don't have to be in LA or DC or Mumbai or Sydney or London. What's going to happen with AI dubbing, and it's not just the voice and the languages but also the lip movements changing so you look like you're saying it, is that it democratizes the ability to reach anybody in the world. The next big global YouTube star could be Indonesian; maybe they don't know English or German or French or whatever, but they can create content and truly reach a global audience. That's why I like it. ==Anything else on dubbing?== I just want to say that's actually one of the most exciting things about AI: the ability to connect with other people, especially if you don't speak their language. I think that's a really powerful thing.
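The CPM/RPM point is just arithmetic, but it's worth making concrete. RPM means revenue per 1,000 monetized views, so reaching a higher-RPM market multiplies revenue for the same view count. The figures below are hypothetical, purely to illustrate the mechanics, not quoted rates.

```python
def ad_revenue(views: int, rpm_usd: float) -> float:
    """Estimated ad revenue: RPM is revenue per 1,000 views."""
    return views / 1_000 * rpm_usd

views = 1_000_000
low_rpm = 0.50   # hypothetical RPM in a lightly monetized market
us_rpm = 5.00    # hypothetical US-market RPM

print(ad_revenue(views, low_rpm))  # 500.0
print(ad_revenue(views, us_rpm))   # 5000.0
```

Same million views, ten times the revenue: that is the pull of dubbing into higher-RPM markets the panelist is describing.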
I don't personally use an AI tool for dubbing yet, just because I haven't found a tool that I think is really accurate, and there's a lot that goes into dubbing; you could accidentally say something you don't mean to say. So I'm very cautious about that, especially if it's dubbing into a language I don't know. This is what Jim calls the "my hovercraft is full of eels" problem. How many people know that reference? Thank you. Thank you. Yeah, and I see it as a spectrum. There are tools like ElevenLabs right now that just take voices and turn them into another language. But then you see the experimental stuff coming out of YouTube's in-house teams that makes the mouth and the lips really move, so you can't tell. And then I think about where that spectrum goes over the next couple of years: if something is created in India and it's funny and it gets ported over to the United States, the kind of AI we'll probably see in a couple of years will also change the background to look more like an American city. Maybe it changes hair, skin tone, background, and creates versions that are the same in concept but completely individualized, maybe all the way down to you individually as a user on YouTube. That customization is interesting to me. That's interesting, and the implications there are both exciting and terrifying: customized content for everyone. We'll try to get to that in a minute. You mentioned 3D; let's talk about text-to-3D. Yeah, so once again we're kind of in the same space that we're in with Sora and video. I wouldn't say there's any truly great text-to-3D modeling system out there yet.
There's one that is just blowing my mind right now, but you can't use it yet. It will generate, say, a broadsword or a mushroom or something; you'll get something like that, and if you give it to a sculptor who understands how to work with 3D, it will save them time and get them closer to their final product. And soon enough, I think we'll be at the point, in the same kind of diffusion story, where it's genuinely useful. ==Do you think we'll get somewhere really reasonable in the next 20 years with text-to-3D, and then combine that with 3D printing, to really go from ideation to an actual object?== Because you put 20 years on it, anything can happen, I think. But there is a big jump going from the digital world to the real world; material science has to get involved. So the 3D-print-to-real-world part will happen more slowly. But yeah, eventually I feel like that all gets worked out. ==Let's get onto this list: ideation. You talked about storyboarding with GPT-4. What are some tools you've used for ideas, ideas about what to create? Renee, what do you think? And not just ChatGPT.== Yeah, so I do use ChatGPT a lot for this. vidIQ has a good tool: vidIQ is mainly an analytics tool for YouTube videos, helping you figure out what content works well and what doesn't, but they have an AI tool that, from just a couple of simple keywords, will come up with a title, a script for your video, and some audio to narrate it. I don't use that as the final piece for the video, but I may use it to spark some ideas, and I've gotten some pretty good results from it. So, because I need to produce videos every day and it's a complete grind, and I never want to burn out,
I actually don't try to use it too much in the very beginning, because I want to find something that genuinely interests me, so that I can get through it and not just serve the algorithm. But once I've established what I think I want to write about, I'll say: give me a bunch of variations of this for titles; think about what kinds of thumbnails might be helpful. Also, Google just recently came out with Gemini 1.5. If you sign up for it, it has a million-token context window, which means you can actually upload your entire final video, these huge files, and ask: how do you think this is going to perform? Do you think I should edit something out of it? For example, with a video I uploaded just yesterday to Gemini 1.5: I was wearing an "overfitted" t-shirt, which is a machine learning joke you wouldn't really get unless you're properly in the scene. I asked it, what's funny about my shirt, and is this going to resonate with my audience? It was a little wrong, I think, but the fact that we were even having that conversation from an uploaded video, with pauses and no direct shot of the shirt, is really just incredible. And I'll mention that I'm testing Spotter AI. It's in beta and it's really impressive; it's really good. I think it's great for a certain level of market research, for ideation, and for trying to really understand your audience avatar, because it leverages the YouTube API, so it has an understanding, at some degree of scale, of what people are already giving attention to in your niche, and so on and so forth. So I really like that. For ideation I also love a tool called 1of10.com. I think that one is really powerful. And again, I have no affiliation with either of these.
Then I would also say, for help with scriptwriting structure from an AI standpoint, there's a tool called TubeSpanner. They're not as well known as some of their contemporaries out there, but in the ideation realm I like those a lot. I already mentioned ChatGPT in this area, and I do think that Midjourney is excellent if you prompt it correctly. A prime example that people do not think about conceptually: you can actually use it to help with set design for your videos, even down to color palette, dynamic lighting, your RGB lighting. You can also use it for posing and modeling, which can help with thumbnail ideation. Instead of trying to generate a whole finished thumbnail, you can use Midjourney just conceptually, to look at positioning, posing, and layout, along with the dynamic lighting, and then go: OK, that's exactly what I want to recreate with my photography. Then take that to your team for your photography, your set design, your key lighting, all of those things. So it's one of those things that can help you a lot on the visual side. Who here has ever worked with a client who says, "I'll know it when I see it"? Show of hands, who's gone through that? Clients from hell, right? Yeah. Now you can actually get through that "I'll know it when I see it" process a lot faster, with a lot less pain, a lot fewer hours, and a lot less cost. ==So you've done your ideation, you've got your story, you know what you're going to make, and you get out there to do the production. What are the tools that have helped you with production? Who wants to start?==
So I think Sony has actually done a really good job of implementing a lot of AI technology in their cameras. I've used a bunch of different types of cameras; I went from Canon to Panasonic and over to Sony, and now I'm using several different Sony cameras, and they have built in a lot of smart automatic functionality. Before, I always set everything to manual so I could dial everything in exactly how I wanted it. But now, with the smart functionality Sony has built in with AI, it's pretty good right out of the box, and I don't have to actually tinker with it and mess with it for every individual thing I'm setting up. So that's pretty cool. On audio, there's a noise remover that's free: you can upload a raw audio file and it takes away a lot of hiss, and that kind of emptiness in a room, if you have echo that makes it sound a bit cold. Production-wise, I'm using Midjourney for images to animate, I'm using Final Cut as my base, and I'm exporting pieces into CapCut, which has all sorts of AI effects depending on what I want from it. Then I'm mastering it out; I'm trying to do HDR, so where H.264 used to be my normal, I'm now using the HEVC (H.265) format so I can get that fuller color spectrum, and YouTube handles it. And on the production side, you can use the visual analysis in ChatGPT, the vision capability alongside DALL-E 3, to actually break down the cinematic aspects of a single frame and then understand what settings and equipment you would need. You can understand the lighting and the depth of field, and punch the settings into your production equipment if you want to try to recreate what you see. Start by asking: tell me and explain to me the color grade that I see. Then you can replicate all of that.
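That single-frame analysis idea can be scripted. The sketch below only builds the request payload, following the image-input message shape from OpenAI's public chat-completions documentation; the model name is an assumption, and the actual network call (which needs an API key) is left commented out.

```python
import base64

def build_frame_analysis_request(jpeg_bytes: bytes, model: str = "gpt-4o") -> dict:
    """Ask a vision-capable model to break down one frame's cinematography:
    lighting, approximate depth of field, and color grade."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return {
        "model": model,  # assumed model name; use whichever vision model you have
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": ("Break down the cinematic aspects of this frame: the "
                          "lighting setup, the approximate depth of field, and "
                          "the color grade, so I can recreate them on set.")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

req = build_frame_analysis_request(b"<raw jpeg bytes here>")
# To actually send it (requires an API key):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(**req)
```

As the panelist notes, how you word the prompt matters; the text above is only one plausible phrasing.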
==Let me make sure I understand this.== You take a picture, an image, and you feed it into DALL-E? Yes, and you ask it about the camera settings that shot this. There's a specific way you have to write that prompt, but you can get those things and, from that single frame, start to get a sense of the settings you would want. You can even look at and analyze things like the color grade and try to replicate the settings behind it, which a lot of people don't realize you can do. So that can help you a lot. When it comes to that, you can also develop looks you want and load them onto your camera as well as into your editor. With the Sony cameras, you can actually load premade LUTs in-camera, which means you don't have to do it in post when it comes to your cinematography and the look you want. DJI is also using AI to enhance audio; there's a setting for it if you're using the latest DJI wireless microphones. So you have that on the production side to enhance audio, rather than having to do it in post. Adobe's audio enhance is great too; it now exists in the latest version of Premiere Pro, not just in beta anymore. There's also Adobe Express, where they're doing a lot of AI things. If you're someone here doing intellectual products or coursework, for example, where you're just going to be a talking head, and you shot it against a green screen, a white background, a black background, or a gray background, Adobe Express is actually extraordinarily good, when you use a clean background and good lighting, at removing the background and giving you essentially a professional, mainstream feel. For those of you whose videos might otherwise look like just a person talking, you can then put something animated, or video, in the background, and so on and so forth.
And now, without needing much effort or technical ability, as long as the initial production is clean, you can make this look very, very impressive. And that's Adobe Express, which is their comparable counterpart to, say, Canva. Also, Adobe Express just integrated with Kajabi, so there's an integration between the two. If you are a Kajabi customer and you want to create some type of course, you can now use AI within Kajabi for a lot of the course functionality they already have, in terms of coming up with course content, and then use Adobe Express to come up with the backgrounds and set design for your videos, including virtual sets. ==Anything more on editing?== There's a lot, and we've got to watch our time, but I think we've talked about Adobe and talked about Descript. What about music and sound tools? There's a company called Protos, over there about halfway down the floor, doing sound effects with AI; it's really cool. Any other editing tools for that process, when you sit down with your footage and want to make it into something beautiful? Yeah, I'll throw in Runway ML; it probably deserves a little bit of love too. If you upload a clip to Runway ML, you can remove the background, and I've done it with a very unclean background and it's worked pretty well. It might occasionally put dots on the arms or the fingers if it gets a little lost, but it's got tools for that, which is super useful if you just want to remove a background. You can also upload a static image and animate it with a motion brush. So it's got a couple of features there that I like. And then, like I said, Udio is worth checking out, man; it's like 23 days old. Can you spell that? I think it's U-D-I-O, all capitals. Udio is on my list too.
And I don't know tons about the company yet, but when I saw one prompt in particular, which was to generate a comedy sketch: the joke made sense, and I don't know what's happening in the background to make it make so much sense, and the laughter in the room sounded like a Netflix special. That's not the kind of audio I saw coming out of Suno at first, so, you know, that could be the future. Yeah, when it comes to audio, I typically use Descript if there's something I need to clean up, because it's super easy; it's one button. It's called Studio Sound, and with one click it will go through, analyze your video and the sound, and then clean it up. I think it does a really good job of fixing background noise that you don't want there, and it's also really good at taking audio that already sounds pretty good and making it sound great. So I want to talk a little bit about editing, but on the clips side: when you've edited something, it's great, it's 20 minutes, and maybe you want to make pieces of it and put them on short form to promote it, or maybe there are other things you want to do. ==We didn't talk about our friends over at Opus, who I've been using and love, and I sort of work with them a little bit. Roberto,== I know you're a big fan. Talk a little bit about what that can do in editing, but also in post-processing, to create clips you can then use to promote what you do, or get more views on what you've already done. So, I am an Opus partner and they are sponsoring me, and you should know how we got there: I used everything else and it wasn't great. That's how we got here. And when I say I used everything else, I paid to use everything else.
So I sank a certain amount of money into getting to the place where I'm happy with what I have. What I find is different about Opus is this, and there are things coming that I've given feedback on. A prime example: in the future, and don't swear me to this, you're not only going to be able to upload your stuff to clip it, you're going to have the option to upload it and have the audio enhanced for you, so you won't have to use yet another tool for that step to make your workflow better. But what currently exists is that you can add AI b-roll to your video clips to add more context, and it generates the b-roll for you. Which means, from a workflow standpoint, instead of just repurposing content I already have, I've moved to using Opus as part of the whole content creation workflow, with it as the point of origin. What that looks like is the concept of the fake podcast: I put myself at an angle, as if I were talking to someone like Jim, without needing anyone to prompt me, and deliver something that could be a sound bite or a hot take. I can sit there and do this over and over, for maybe an hour-long session. That means I have 50 pieces of content, and I don't even need Opus for the clipping portion. What I'll use it for is the animated text captions you see in these clips, and the ability to add AI b-roll, saving me the massive amount of editing time I would normally have to spend. So again, in this workflow, I take a 60-to-90-minute session and turn it around very fast.
From that I now have 50 pieces of content, and I can take those 50 pieces and distribute them across five platforms, on a schedule, to my taste, which means I've now turned 90 minutes of work into 250 pieces of distributed content. And that can be done in a day. And because Opus is the point of origin, if I make a template, or in the Pro version save my settings for this purpose, I can even have a VA or a personal assistant do all of the uploading and scheduling and everything like that, and I just review the final product. So I really can turn out 250 pieces of content in a day from 90 minutes to two hours of work, and saturate the market with my brand presence: all the time, everywhere, all at once. I'd add, with the tool I mentioned called Gling, one of the best things about it, besides cutting out the "ums" and the takes I didn't like, is that you can have it punch in, alternating a wide, a medium, and a close-up from the same shot. I'm shooting in full 8K on a Sony Alpha 1, which seems kind of extreme because I'm basically just filming in my bedroom, but it allows the system to cut between a tight headshot, a medium shot, and a wide shot. Those cuts would normally be gaps I'd have to cut in and out by hand, so it just saves a ton of time there. OK, there's one more thing we're going to talk through here, and then we'll do a little Q&A; we might go a little long, and these guys will be around. I want to talk about packaging and analytics. We talked about vidIQ, we talked about Spotter. What other tools are out there that you think are working? Tools that help you with things like headlines, descriptions, and thumbnails, but also let you look at your analytics, whether from YouTube or elsewhere, and make sense of all the data coming in. Who wants to take that one on? Yeah.
So this isn't for the faint of heart, but there are amazing tools in OpenAI's API that aren't super complicated to deal with, where you can actually connect an API. One thing a friend of mine has done, and he showed me how cool this was, is connect his YouTube channel and its entire analytics system into a sandbox environment, a tiny little piece of server untouched by everything else. And he can query ChatGPT to say: go look through all the comments that have been made on my channel, look through everything you know about the demographics, and give me some tips for how to create a video, things like that. I don't know of any tool that puts that together yet, and maybe Opus or someone will do something like it in the future, but it can be cobbled together now, and it's probably coming. Actually, there is a tool doing something like that; it's called super.fans. You can look into it. ==I was just going to mention them, yeah.== So there are a lot of analytics tools out there. I used to work for a company called TubeBuddy; there's vidIQ, there's Spotter, there's YouTube directly. So there are lots of different places you can get analytics. But what I'm really excited about are the companies doing something interesting with the analytics, identifying things you wouldn't normally be able to identify yourself. Super.fans is basically figuring out what products and services you might be able to offer your audience, and that isn't really something the other analytics platforms are offering. So I think you want to find those types of companies that are identifying something different. They're also building out a leaderboard of fan interaction, so you actually know who the top people in your ecosystem, the ones interacting with your content, are.
So maybe you can give them extra rewards, extra attention, and it gives you more access to directly interface with them. You can think of it, for a content creator, as the equivalent of a CRM: instead of a customer relationship management tool, it's a fan relationship management tool. TikTok has also rolled out a tool they're experimenting with where an avatar sells your products and gives each person a custom message, and they're starting to measure which kinds of messages work better. So stuff like that is around the corner. So the thing about super.fans is that it will go in and look at your comments and things like that and analyze them. Do you guys know of any tools that are helping people manage their communities in general, whether it's on Discord or on YouTube or elsewhere? A lot of the tools are being built into the platforms themselves, into YouTube and TikTok. And I know Kajabi, and I've been by the Kajabi booth a few times here: Kajabi has a community feature, and they also have AI tools that can help on that side as well. So I think it's mostly being built in. Totally. I think the best AI tools for community are going to be the ones that help you identify patterns, things you wouldn't otherwise notice, because I don't think you can separate the human component from the community, or outsource the relationship you have with your community. I don't think that's what you should desire from AI, and that's why I think it's hard to build AI tools around community. I think we have to look at AI tools as a way to see who we're leaving out, who we're missing, who we can double down on, and who all the top interactors are. That's what I like about what super.fans is doing, and it can also show you opportunities to do better.
So I think the pattern recognition aspect of AI is vital when it comes to community, but I don't think we could ever truly outsource that relationship, and I don't think we should desire to. Mighty Networks is leaning into this a little bit and is worth a look. Are there any questions out there? We've got a question right here; go ahead, sir. You talked about using 3D avatars, motion tracking, and