Making a 3D Virtual YouTuber
Here's a diagram to give you an idea of how to make a 3D Virtual YouTuber. I'm hoping it helps people who are lost about where to start. Things don't necessarily have to happen in this order, and you don't have to use Unreal Engine/OBS; this is just one example. There are many different hardware and software options that can do the same thing.
Art and Rigging
Let's assume there's a different person working on each stage. Some people might want to start from the 3D stage; that can work too if the artist is comfortable with it.
First you'll need a character design. Typically this starts in 2D, since a mockup can be done quickly and it's easy to show others what the character could be. The concept art will show the character's personality and design, and it can be used to inspire others working on your project.
With the right group of people you could work straight from the concept art, but the classic approach is to make a model sheet, which shows in detail exactly what will be modeled. Think of this as instructions for the modeler.
3D Modeling + Textures
At this stage the character is literally sculpted inside your desired 3D software. Imagine a Greek sculpture that's all grey. Once that's finished you can "colour" the model by adding textures. There are also shaders, which take textures as input and, combined with some other things, change how your model looks. Why does Kizuna AI look 2D? Because the artist set up shaders that define how to render her in a 2D style; lighting probably affects this as well.
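To make the shader idea concrete, here's a minimal Python sketch of cel (or "toon") shading, the basic trick behind 3D characters that read as 2D. None of this is a real engine API; the function names and values are invented for illustration. The smooth diffuse lighting term is quantized into a few flat bands, then multiplied with the texture colour.

```python
# Minimal sketch of cel (toon) shading: instead of smooth Lambert
# lighting, the diffuse term is quantized into a few flat bands.
# All names here are illustrative, not a real engine API.

def lambert(normal, light_dir):
    """Standard smooth diffuse term: dot(N, L), clamped to [0, 1]."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, d))

def toon_shade(texture_color, normal, light_dir, bands=3):
    """Quantize the diffuse term into `bands` flat steps, then
    multiply the texture colour by that stepped intensity."""
    diffuse = lambert(normal, light_dir)
    stepped = round(diffuse * (bands - 1)) / (bands - 1)
    return tuple(c * stepped for c in texture_color)

# A surface facing the light keeps the full texture colour...
facing = toon_shade((1.0, 0.5, 0.2), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
# ...while a grazing angle snaps down to a darker flat band
# instead of fading smoothly.
grazing = toon_shade((1.0, 0.5, 0.2), (0.0, 0.0, 1.0), (0.0, 0.8, 0.6))
```

Real toon shaders run on the GPU and add outlines, rim light and hand-tuned ramps, but the band-quantization step is the heart of the look.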
Rigging
This is where you tell the computer how your character can be animated; you are defining the limits. This can be done using a combination of bones and blendshapes. Bones define where and how the character can bend. Blendshapes tell the 3D sculpture, or "mesh", how to change shape.
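As a rough illustration of how blendshapes work under the hood, here's a plain-Python sketch (no real DCC or engine API; the "smile" vertex data is invented). Each blendshape stores a target position for every vertex, and a weight slides the mesh linearly between the base shape and the target:

```python
# Sketch of the blendshape rigging primitive, using plain Python
# tuples as stand-ins for mesh vertex data.

def apply_blendshape(base_verts, target_verts, weight):
    """Morph the mesh linearly from the base shape toward a sculpted
    target shape. weight=0 is the base face, weight=1 is the full
    target (e.g. a fully formed "smile" shape)."""
    return [
        tuple(b + weight * (t - b) for b, t in zip(bv, tv))
        for bv, tv in zip(base_verts, target_verts)
    ]

# A hypothetical "smile" target that pulls one mouth-corner vertex
# up and out; the other vertex is unaffected.
base  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
smile = [(0.0, 0.0, 0.0), (1.2, 0.4, 0.0)]

# At weight 0.5 the mouth corner sits halfway between base and smile.
half_smile = apply_blendshape(base, smile, 0.5)
```

Bones work differently: instead of per-vertex targets, each vertex is attached to one or more bones and follows their rotations, which is why bones handle bending limbs while blendshapes handle facial expressions.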
Motion Capture
When you add up all the systems and work involved, 3D motion capture becomes very expensive, especially if you want everything fully animated, including face, body and fingers. There's a variety of systems that handle these in different ways. For a list of options check here:
Typically these systems come with plugins and instructions that will help you stream live motion capture data. They are partly plug and play, but things can get more complicated depending on your requirements. Your character lives in your desired software package (Unreal, Unity, Maya, etc.) and receives realtime input from the motion capture system via a plugin. Once it gets the input, it starts moving around just like you.
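The receiving side of that stream can be sketched like this. The packet format below (one `bone,rx,ry,rz` rotation per line) is made up for illustration; real systems ship their own plugins and wire protocols (often over UDP). But the shape of the work is the same: parse each incoming frame and copy the values onto the matching bones of your rig, many times per second.

```python
# Hedged sketch of applying a live mocap frame to a rig. The text
# packet format and the skeleton dict are hypothetical stand-ins
# for whatever your mocap plugin and engine actually provide.

def parse_mocap_packet(packet: str) -> dict:
    """Turn one text frame ("bone,rx,ry,rz" per line, degrees)
    into a pose dict: {bone_name: (rx, ry, rz)}."""
    pose = {}
    for line in packet.strip().splitlines():
        bone, rx, ry, rz = line.split(",")
        pose[bone] = (float(rx), float(ry), float(rz))
    return pose

def apply_pose(skeleton: dict, pose: dict) -> None:
    """Copy streamed rotations onto known bones; silently ignore
    bones the rig doesn't have."""
    for bone, rotation in pose.items():
        if bone in skeleton:
            skeleton[bone]["rotation"] = rotation

skeleton = {"head": {"rotation": (0.0, 0.0, 0.0)},
            "spine": {"rotation": (0.0, 0.0, 0.0)}}

packet = "head,10.0,0.0,5.0\nspine,2.5,0.0,0.0"
apply_pose(skeleton, parse_mocap_packet(packet))
# The rig's head and spine now carry the performer's rotations.
```

In practice the plugin does this for you; the sketch is only meant to demystify what "receives realtime input via a plugin" means.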
Realtime Rendering Software
You'll probably want to stream your avatar into an engine like Unreal or Unity to make it look great and give yourself lots of options in the future. With some motion capture systems you can stream right into their own software, but it will be more limited in terms of rendering and functionality. You could also stream straight into Maya if that fits your needs.
If you're using a game engine you can set up lighting, shaders and logic there, among other things. Basically your character will be inside a video game, so there are lots of possibilities. In this scenario you could set up various kinds of logic: if you clap your hands the background changes, or when you smile an NPC spawns in the virtual world. Basically anything.
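The clap/smile examples above boil down to an event-routing layer. Here's a small Python sketch of that idea, assuming your tracking setup can hand you named events; the event names, world state and handler are all hypothetical, not any real engine's API:

```python
# Sketch of the gameplay-logic layer: tracking events in,
# world changes out. Everything here is illustrative.

world = {"background": "studio", "npcs": []}

def on_event(event: str) -> None:
    """Route a named tracking event to some gameplay logic."""
    if event == "clap":
        # e.g. toggle the backdrop when the performer claps
        world["background"] = (
            "beach" if world["background"] == "studio" else "studio"
        )
    elif event == "smile":
        # e.g. spawn a friendly NPC when the performer smiles
        world["npcs"].append("npc_%d" % (len(world["npcs"]) + 1))

on_event("clap")   # background flips from "studio" to "beach"
on_event("smile")  # one NPC appears in the world
```

In a real engine the same routing would live in Blueprints, C# scripts or similar, but the structure is identical: detect, dispatch, react.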
Audio
This is not my area of expertise, but you'll need a way of recording your voice. If you're pre-recording content that you'll upload later, it's easier: you can record the audio on essentially any device and align it with the video afterwards in software like Adobe Premiere. Quality will vary depending on the device.
If you want to stream your audio live, you'll have to connect your audio device to your computer so streaming software like OBS Studio can pick it up and use it.