2026 Predictions & LTX-2 4K Open Source Video Model

Pierson Marks (00:00)
Hello, hello. It's funny, because I always post-process these and add the intro music in, so I can just hear it in my head right now. But happy new year, happy 2026. Episode 26 for 2026. Pretty cool.

Bilal Tahir (00:01)
Yeah, hello. Happy 2026.

Yeah, wow. We did it. We made it to 2026. Who would have thought?

Pierson Marks (00:23)
It's amazing. It's going to be a year of generative media, and this podcast covers all of it, from images and video to audio and 3D graphics. If you're listening for the first time, we try to stay focused on the generative and creative fields, but AI more generally is sprinkled in too.

Bilal Tahir (00:45)
Yeah, yeah, I mean,

it's crazy if you just recap 2025. We started in the middle of the year. What did we even have at the beginning of last year versus what we have now?

Pierson Marks (00:55)
Look, I mean, I have

the agenda pulled up. The first one was Veo 3, which had just come out. This was last summer. ElevenLabs V3, which is crazy. ElevenLabs V3 is still in alpha, even though it came out when we did episode one, which is nuts.

Bilal Tahir (01:01)
It had just come out.

Yeah, come on, ElevenLabs.

Pierson Marks (01:17)
I know, but yeah, it's very cool. We got everything from image editing and Suno and Grok 4.

Bilal Tahir (01:28)
Right. Yeah, I mean, some of the

big moments, I think. We basically started 2025 with cheap images kind of there with Flux. Actually, maybe not Flux, but Stable Diffusion, at least SD Turbo, was there. And then Flux quickly followed with Schnell, which was amazing. But it wasn't the highest quality, and the quality bar for ridiculously cheap images has just been pushed more and more. Flux 2 has come out. Qwen has come out.

Pierson Marks (01:44)
Right.

Bilal Tahir (01:55)
Image editing was not solved, and you could argue that Nano Banana, and even slightly before Nano Banana, gradually just solved it. So if I look back, the two big moments for generative media were image editing being solved, and then cheap video. I mean, we had Veo 3, kind of, but we didn't have native audio and the quality kind of sucked. So amazing-quality video, at least

convincing someone for five to ten seconds that this is Hollywood-level video. I feel like that threshold was reached. Maybe not long form, maybe not 30 minutes yet. Maybe that happens in '26, but at least we saw the first convincing one-to-two-minute short videos with amazing audio and sound effects. So those were the two big breakthroughs for me.

Pierson Marks (02:37)
Right.

Totally.

I mean, talking about Hollywood-style production, Runway and Adobe have this big partnership now. So you can generate Runway video in Premiere Pro or whatever.

Bilal Tahir (02:50)
Yeah.

Yeah, that's interesting.

I don't know what they'll do with Firefly. That kind of takes a back burner, and they just use 4.5 now.

Pierson Marks (03:00)
Totally.

I mean, I think it makes sense for them. Adobe, do they want to be a research lab? Probably not. Just let the research labs do the cool stuff and then integrate it into a UI that's already used. You already have the platform. Just add the features and keep your platform sticky, if I was them, you know.

Bilal Tahir (03:18)
Yeah, I mean, I think Photoshop is one

of those, like Microsoft Word or Excel. It's not gonna die. It's just gonna get more and more bloated, and people will complain, but they'll still keep on using it.

Pierson Marks (03:27)
Totally.

I mean, yeah, we'll talk about this with the Qwen image editing, layering image model. But before we get into it, since we were just recapping half of 2025 and this is the first episode we recorded in 2026, I think it'd be interesting to think about our predictions for 2026.

Bilal Tahir (03:48)
Ooh,

if we make predictions, put it out in the air, then we come back.

Pierson Marks (03:51)
Yeah, we'll put

out predictions and then, you know, at the end of this year, start of next year, we'll look back and be like, what came true? What didn't come true?

Bilal Tahir (04:00)
I know, that was very annoying on Twitter,

because everyone started doing theirs. Well, two things happened: apparently the Twitter algo changed and started rewarding long-form articles instead of tweets, and then of course the new year came. So I just saw hundreds of these, oh, this is my 2025 recap and these are my predictions. God damn it. I mean, it's one of those topics you get hooked into and want to read, but at the same time I'm like, guys, I can't read all this shit, you know?

Pierson Marks (04:27)
Right,

totally.

Bilal Tahir (04:29)
Yeah, so

yeah, what are your big predictions, I guess?

Pierson Marks (04:32)
I think one

of my big predictions, and I don't know if this is as much a prediction, is that world models will become more important. World models became a thing in 2025 with Fei-Fei Li's World Labs and Marble. You've got Genie 3 from Google, which is still not publicly released.

Bilal Tahir (04:43)
yeah, that became a thing.

Well, I would say there was a

proof of concept, but people aren't really using them yet. So maybe that happens.

Pierson Marks (05:02)
Well,

I mean, Marble is being used. I think the usage stats on Marble are actually pretty solid. You can use it today; you can't use Genie 3. And they're different. Marble does the whole Gaussian splatting thing, and there's not much interactivity. Genie 3 is more like, hey, we're generating this video game thing where you can actually walk around and interact with the world. Like painting the wall. That was the cool

Bilal Tahir (05:07)
really?

Pierson Marks (05:32)
thing

where you rolled blue paint on the wall and it stuck around for however long the temporal memory was, a few minutes. But I think that maybe by the end of 2026 we're going to start to see integration of these world models into more interactive environments and video games. Maybe some people will figure out how we can use a world model to build a video game

Bilal Tahir (05:36)
Hmm

Pierson Marks (06:02)
without actually having to program the scenes and build all these 3D models.

Bilal Tahir (06:05)
That would be amazing.

Yes. No, that would be crazy. Sick. OK. So that's one prediction. Any more?

Pierson Marks (06:11)
I think retro gaming is going to make a comeback. I saw something today. This is not really AI related, but kind of. RuneScape, one of my favorite games back when I was a kid, is at an all-time high of players, like Old School RuneScape. If you haven't played it... have you played RuneScape?

Bilal Tahir (06:24)
Hmm.

No, I've heard of it. It's kind of like Minesweeper? Or Minecraft?

Pierson Marks (06:35)
No, no, no.

It's like, imagine, do you know Skyrim? So imagine Skyrim but with a 2.5D isometric sort of view, and it's low-res, and you have your knight or whatever.

Bilal Tahir (06:39)
Yeah.

God, right. Yeah. But you said making

a comeback. What does that mean, making a comeback? Is there an AI version of it that makes it more popular with the kids?

Pierson Marks (06:57)
I

think retro games, or indie games, are going to explode, because one, it's going to be easier to create these games. My Twitter feed right now is just filled with people building these games through Claude Code, these vibe-coded games, and some may take off. Some may be an individual one-person indie developer taking the old source code of the original RollerCoaster Tycoon, building these games as a

small single developer or small team with AI, actually allowing us to relive some of these cool nostalgic games that everybody played, and then sprinkling some LLMs in, like RollerCoaster Tycoon. Or you could interact with NPCs that are now more intelligent, that actually take better actions. They're agentic; they're not just these programmed things that walk around. They each have some sort of memory and a thought process for why they're doing

things, all the NPCs. And then adding that into the old video games, because the old games' IP and everything was really cool. People like myself, I would love to see a Pokémon where all the NPCs actually remember you, and kind of a more dynamic world. Yeah, crazy stuff.

Bilal Tahir (08:08)
Yeah, I mean,

I can totally see it. Those old games are tiny compared to the compute we have right now. You can easily run them on NES emulators or whatever.

There's no reason why you can't just vibe-code your way in and have a layer to change the screens or the worlds, et cetera, and just relive a Mario in a different world. There probably are ways you can do a clunky version of that already, but it'll probably get more nuanced. It reminds me of when Pieter Levels did that flight simulator thing that took off for a while. I don't know what happened to it, but he was literally selling ads on blimps in the game, and he was making money off of that. That was crazy.

Pierson Marks (08:27)
All right.

Yeah.

Bilal Tahir (08:44)
Yes, I am.

Pierson Marks (08:46)
That could be the thing where, instead of that voxel-based pixelated sort of environment, there are world models that actually look high-res, you know, GTA 5, GTA 6 style. They're actually really good, not low-res.

Bilal Tahir (09:01)
It's

interesting because I feel like a lot of people deliberately like the low-res. It's kind of like memes, you know, memes from the Windows 95 era. I feel like there's something about the low-res retro aesthetic that sticks. But I agree, there's a difference. It's a different type of thing. Like, do you want to play or build Elder Scrolls, right? High-res, whatever. Or do you want to do Zelda? When you want to do Zelda, you're probably thinking like '95,

or Age of Empires II or whatever, right? You want the 90s aesthetic for that. So it's interesting.

Pierson Marks (09:35)
Totally. Yeah. Well,

yeah, so I guess those are my predictions for the gaming industry. And one thing I think is not going to happen: I don't think we're going to see these generative UIs. Some people are saying that what you see on your screen is going to be different for me versus you, and the AI is going to make the right interface for what you're trying to achieve at that moment. I don't think we're going to see this in 2026.

And so that's an anti-prediction, you know, of what's not going to happen.

Bilal Tahir (10:00)
Yeah, I agree. And what,

yeah, yeah. I think it's just human nature. People like having standardization. So yeah, I agree, that's not going to happen.

Pierson Marks (10:14)
That's what design is. Yeah. So

any predictions on your end? What are your thoughts?

Bilal Tahir (10:18)
Well,

for me, I think there are two key things I'm basing a lot of this on. First off, in terms of scaffolding and stuff, like we've talked about every week, there are different techniques people come up with,

like different keyframes, using the grid as keyframes, and stuff like that. Obviously people are just going to figure out more and more, so we'll probably see higher and higher quality. I can definitely see us getting really Hollywood-level, not Hollywood, but

almost award-winning-level shorts, at least 10-to-15-minute videos, not just commercials, but actual storytelling. I predict we'll see some truly viral, amazing shorts which are gonna blow people away. We've kind of had hints of that. There was this controversy where some video game developer who won Video Game of the Year got banned or something because they used AI to generate some assets, which was ridiculous. People were like, what are you doing?

I think we'll see basically a demarcation where people have amazing creative shorts and you can't really deny that it's amazing. So the argument that AI produces only slop, only generic generative content, it'll be very hard to defend that. At the same time, you'll probably see a lot of anti-AI people double down on that.

Especially in industries like graphic art, with people getting laid off, it's gonna get very personal, on a whole new level. And it comes from people feeling, obviously, that their livelihood is at stake right now. So I think you're gonna see people being very skeptical of good content and not really judging it objectively. But I mean, it's hard to say where that goes.

Pierson Marks (11:48)
All

So.

Totally. No, so I'm going to be

curious. I'm going to ask you a more pointed question, trying to predict the timing here. Do you think at the end of 2026 the sentiment around AI creation, creative assets or whatever, is going to be worse or better than it is today? It seems like you think it's going to be worse at some point in 2026. But at the end of 2026, do you think it's going to be worse or better

than it is today.

Bilal Tahir (12:33)
I mean, honestly, it's hard to predict. I feel like it's easier to predict chip production than human nature. I just can't see it going down in the short term, you know, unless something happens. What do you think? I feel like you have an idea here.

Pierson Marks (12:40)
Hahaha

Okay.

Well, I'll answer on the sentiment

level. Because I think sentiment already today is pretty poor on AI-generated content. There's a lot, like the Hollywood strikes, the video game you just mentioned. And I think it'll continue to get worse. But I wonder if it'll get worse and then, around September or October, start to turn around again. People start to actually take advantage of the tools, you see more content being created, and the sentiment starts to shift, where at the end of 2026, maybe it's

Bilal Tahir (12:57)
Yeah.

Pierson Marks (13:22)
you know, actually slowly becoming adopted and better. Because a lot can happen.

Bilal Tahir (13:26)
mean, eventually you kind

of have to join it. But I do think what we'll see, and we probably already kind of see it, is this whole thing about human-only content. People will try to craft some sort of, you know,

people have tried it on social media, whatever. Maybe it's kind of like Etsy, but for human-only crafts and stuff. They'll try it and they'll fail miserably, because that's just how it is. People will say, yeah, we only want human artists, we don't want AI, and they'll try it. Maybe some people will actually believe it, other people will virtue-signal about it, but at the end of the day, the economics will just not work. So there'll probably be an effort like that which kind of flops, you know. It's kind of like the Bluesky

version of that. Another thing I do want to mention, which I think will be very critical in terms of improvement in video especially, is the launch of the GB300s. Right now, basically all models, from LLMs to images to video, are based on NVIDIA H100 chips, or

A100s sometimes, which are the older ones. And if you're on Google, you use TPUs. At least on the NVIDIA side, what's really exciting is that NVIDIA's Blackwell series will come online for inference basically at the end of Q1. The way it works is that the Blackwell series was already shipped to the labs, so they're training their new models using Blackwell, which is basically a way better chip, with way better memory bandwidth. I'll talk in a second about what that means. But...

The way it works is you train a model, which takes about six months. The prediction is that most likely xAI, Twitter's AI lab behind Grok, is probably going to be the first one to finish training, just because no one does hardware like Elon. They've been able to spin up these data centers very quickly in Tennessee, so they were the first

ones to get the Blackwell chips, the first ones to train on them. So the training takes six months. But once you train a model, what you do is take the chips and switch them to inference. And Blackwell does both. Back in the day there was a chip for training, like the H100, and then a chip for inference. With Blackwell, you do both.

So you're going to train this model and then switch to inference. We'll see inference start around late Q1, when we'll see the first instances of what it looks like to run inference on Blackwell. Why is this important? Well, why is there a bottleneck at five-to-ten-second videos? It's all memory. And that will open up. We're probably going to see 30-second to one-minute videos being one-shotted, and that's going to be a huge game changer in terms of

consistency, which we've talked about. So I opened with my prediction about the workflows getting better, but then you simultaneously have the floor being raised just from the sheer ability of the models to output a longer video quicker. So cheap 30-second to one-minute video generations, that's gonna be a huge unlock.

Pierson Marks (16:27)
Hmm. And that's memory, that's because the memory bandwidth increases significantly on these new chips?

Bilal Tahir (16:27)
and yeah.

Yeah,

pretty much. The memory wars right now are just crazy. If you look at SanDisk's or Micron's stock price, it's just skyrocketing, because memory is the bottleneck. You need huge amounts of memory. And the other thing people don't understand about inference is that most of the models use thinking now.

The thing about inputs is, when you input a long context, you can parallelize that. When you output, you can't parallelize. You have to generate token by token through the chip, and the bottleneck becomes that decode step. So when the model is thinking a lot, decoding, that's been a fundamental limitation, and that's why a lot of the labs have been bottlenecked. With this Blackwell chip, they'll be able to think for longer, faster, and offer the inference

cheaper. I don't think people quite understand the capabilities that's going to unlock with the GB300, and that's why people are racing to

get that setup going. The other thing is optimization. By the way, there's this person, Gavin Baker, who has a podcast about this; I'll send the link. SemiAnalysis is obviously great too, but he does a very good high-level analysis of this. So I encourage people to get into understanding this, because when you understand the chips and the memory, everything else kind of falls into place in terms of what's happening in the industry. There's so much alpha there.
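The prefill-versus-decode asymmetry described above is worth making concrete. Here's a toy Python sketch, not a real inference engine; the functions and counts are illustrative assumptions, but they capture why long outputs, not long prompts, are the sequential bottleneck, and why decode cost is dominated by re-reading the growing KV cache from memory:

```python
# Toy sketch (not a real inference engine) of the prefill-vs-decode
# asymmetry: prompt tokens can be processed in one parallel pass,
# but output tokens must be generated one at a time, each step
# re-reading the growing KV cache from memory.

def prefill_steps(prompt_len: int) -> int:
    # All prompt tokens attend to each other in a single batched pass,
    # so the sequential step count does not grow with prompt length.
    return 1

def decode_steps(output_len: int) -> int:
    # Each new token depends on the previous one, so generation is
    # inherently sequential: one step per output token.
    return output_len

def kv_cache_reads(prompt_len: int, output_len: int) -> int:
    # Rough count of cached tokens streamed from memory during decode;
    # this is why bandwidth, not raw FLOPs, bounds generation speed.
    return sum(prompt_len + i for i in range(output_len))

# A long prompt is cheap in sequential steps; a long output is not.
assert prefill_steps(10_000) == 1
assert decode_steps(1_000) == 1_000
```

In this toy model, a 10,000-token prompt costs one parallel pass while a 1,000-token thinking trace costs 1,000 sequential steps, each bounded by how fast the chip can stream the KV cache out of memory, which is why higher-bandwidth parts like Blackwell matter so much for long video and reasoning workloads.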

Pierson Marks (17:57)
Interesting. Okay, so, cool, so those will come online, impact video gen, impact inference.

Bilal Tahir (18:06)
Yeah. Yeah, and on

Google's side, they're working on TPU 7, I believe. Is it 7 or 8, one of the versions? And that'll obviously be the one.

Pierson Marks (18:12)
And they'll be selling those, right?

Are they selling them? Or what's the deal?

Bilal Tahir (18:15)
They're selling them to Anthropic,

which is also a huge deal. I mean, the chips are so bottlenecked, everyone's just cozying up to each other. Like Google and Anthropic were not friends, but they're like, hey, we need the chips. So Anthropic is going to be using TPUs. They're already using Trainium as well, which is Amazon's shitty chip, well, relatively shitty chip. And Trainium will have another version as well, made with Broadcom. Broadcom's stock is also going up. So there's so much

interesting shit. There you go.

Pierson Marks (18:41)
Well, there you go. You feel like a Bruins. Yeah.

Well, yeah, let's

see, the chip wars. If you haven't read the book Chip War by Chris Miller, it's pretty interesting. It came out in 2022, right about when ChatGPT came out, so it was right up to date then. It's very interesting now, a few years later. Hopefully there's a revised second edition, because the last few years the chip wars have definitely intensified. So Chris, if you're watching this, make a second edition of your book, have it come out at the end of 2026, and cover

all of this.

Bilal Tahir (19:14)
yeah. Yeah,

it's honestly the most interesting drama on the planet, you know. It's better than any Netflix show, from geopolitics to business to actual product, AI. Everything is connected, right? And just following that circle is fascinating.

Pierson Marks (19:32)
It's,

you know, you can't follow it all. My whole thing is that there's so much going on in the world. There are so many people doing so many things, news, new products. I get notifications from Hugging Face, and every day there are like ten new models, ten new papers, all these things that can just overload your brain, and you can feel overwhelmed. At least this is how I feel sometimes: oh, there's so much shit happening in the world, and

Bilal Tahir (19:40)
Yeah.

Yes.

Pierson Marks (20:02)
I'm falling behind. I know Andrej Karpathy talked about this too. I feel like I'm behind, given my level of skill as a programmer, because I don't know how to take advantage of all these tools well. It's like a knowledge gap: you know that you could take more advantage of these tools. And it's super interesting, because at the end of the day, one,

you kind of have to be like, okay, filter out the noise. Everything is noise unless you're on the floor building. Like, there was a great saying by, who's the Diary of a CEO guy, the podcaster? Shit, what was his name? The Diary of a CEO... Steven Bartlett.

Bilal Tahir (20:43)
Dari? Darius.

Pierson Marks (20:48)
So Steven Bartlett has this podcast, The Diary of a CEO, very cool. He's had essentially every CEO on there. But he was like: while my competitors look up at the sky at the moonshot things, things that take a long time and are very hard, I look one step in front of me and just make that next step. Filter out all the noise, compounding growth, one percent better every day, while the other guys are swinging for the fences. A hundred percent. It's like,

Just focus on what you can control right in front of you and just get a little bit better every day and push yourself and just filter out all the other stuff.

Bilal Tahir (21:24)
Yeah, I know. Filtering is honestly,

filtering is so important, and I do a terrible job because I'm addicted to Twitter all the time. I'll just be looking at the articles and posting, like, can I read this? But at the same time, like you said, most stuff is noise, and you just need to be able to filter

and really concentrate. At the end of the day, I feel like the people who do best are the ones who balance their output versus their input, you know. You can read all day, you can listen to all the podcasts you want, but at the end of the day, the way you truly learn and truly

move forward is when you just try to produce something, whether it's developing code, tinkering on a different project, hardware, whatever, or writing an essay, just thinking things through. So, there's this funny thing. I remember one of the founders of Indie Hackers said it: you should produce more than you consume. That's what he said.

And I was like,

Pierson Marks (22:16)
Right.

Bilal Tahir (22:16)
yeah, happens to me when I go to Chipotle.

Pierson Marks (22:18)
Yeah, no, 100%. But, well, cool. Okay, so let's get into some of the things that have been happening in the last few weeks.

Bilal Tahir (22:28)
Yeah,

yeah, we should mention, we're recording this in early January. I think Christmas vacation was very interesting, because a lot of people simultaneously realized how great Opus 4.5 was. Apparently it's been a step-change moment, kind of like when Claude Sonnet 3.5 came out and everyone was like, whoa, wow, the model is actually good now. And I felt it too. I was playing around with it all break, and it was just

amazing to see what it could do. And, like you mentioned with Karpathy earlier, he made that post about feeling like he was being left behind while playing with Opus 4.5, because Opus 4.5 is such a great model. But at the same time, I feel like the limitation now for a lot of people isn't that the model isn't smart enough; it's that the tooling around it, or we ourselves, aren't smart enough to really tap into its superpower. And I felt that too, where I feel guilty, where

I feel like I'm not harnessing it. It's like having gold right there in front of me and not having the shovel to dig it out. It's almost frustrating, you know?

Pierson Marks (23:30)
All

Yeah, no,

totally, totally. I mean, that's why I really liked Claude Code. I remember six months ago saying to you, hey, go try Claude Code over Cursor, because it was really, really helpful. I think the harness there and the way they manage context is so good. Context engineering is still probably my favorite thing that came out of 2025, because it really forces you to think: the model is the model, and

all it knows is what's in its context window. So managing that context window well means being explicit, taking a step back when you're prompting these models and really thinking, am I doing a good enough job explaining what I want to accomplish? At least I make this mistake all the time, where I assume that because I can think it,

all this thought in my head is correctly translated into the one sentence I put into the model. And that's just a complete fallacy, not true at all. But I'm like, wait, why didn't you understand what I was trying to do, what I have in my head, when I put one sentence into the model? That's the only context it has: a sentence. Yeah.

Bilal Tahir (24:45)
Right, yeah. And even the

phrasing and the granularity, I don't know if you've noticed.

Claude will do a great job, and then it routinely does something called compacting: when the context gets long enough, it summarizes all the things. And you notice an immediate degradation in its ability, because it had all this granular back and forth, and then all that gets summarized into maybe a few paragraphs. Then it'll make that same mistake, and I'm like, we fixed this last time, why did you go back and make the same mistake again? It's because it lost that context. So to my earlier point about the GB300s, I actually think

of course the tooling and stuff is getting better, but context engineering, I do think, is one of the big things we're gonna see get better and better this year. There's an opportunity there, but also the raw context the model can digest is just gonna get longer. So hopefully the floor gets raised there as well, where maybe we're just sending in 10 million tokens each time and it's not a big deal.
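The compacting behavior described above can be sketched in a few lines of Python. This is a toy model, not how Claude Code actually implements it: the `count_tokens` heuristic and the summary placeholder are stand-in assumptions, but it shows exactly where granular details get lost.

```python
# Hypothetical sketch of "compacting": when the running transcript
# exceeds a token budget, older turns are collapsed into a short
# summary, which is where granular details (like a previously
# fixed bug) can get lost.

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: one token per word.
    return len(text.split())

def compact(turns: list[str], budget: int, keep_recent: int = 2) -> list[str]:
    total = sum(count_tokens(t) for t in turns)
    if total <= budget:
        return turns  # everything still fits; nothing is lost
    old, recent = turns[:-keep_recent], turns[-keep_recent:]
    # Lossy step: many detailed turns become one coarse summary line.
    summary = f"[summary of {len(old)} earlier turns]"
    return [summary] + recent

history = [
    "user: the parser crashes on empty input",
    "assistant: fixed by guarding against empty strings",
    "user: now add CSV support",
    "assistant: done, added a csv module reader",
]
compacted = compact(history, budget=10)
# The bug-fix discussion now survives only as the summary placeholder.
```

After compaction, the turn where the crash was fixed exists only as a coarse summary line, which is the failure mode described above: the model repeats a mistake because the detailed fix discussion is gone from its context.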

Pierson Marks (25:41)
Yeah, totally. I mean, I'm gonna take the other side of this. Maybe the context window will get better and bigger and stuff, but at the end of the day, it's like a human, I think, in the sense that if you give it too much stuff...

The more you stuff into the context window, the more explicit you have to be in directing the model, because it's information overload. You could have conflicting documentation. There may not be one right way to do something, and when you overload it with so much context, it might just not do the right thing. And "right" is, like,

Bilal Tahir (26:04)
Right.

Pierson Marks (26:25)
a matter of your perspective. I'm like, hey, I get why you did that, that makes sense, but that's not what I wanted you to do. So it was right, but it was still wrong, because I didn't explain well enough what I wanted. And then it got confused and went down this other pathway because of everything that was stuffed in the context window, you know?

Bilal Tahir (26:42)
Yeah. I mean, that's

a good point. Maybe the ultimate bottleneck, the way we solve it, is when we have Neuralink. I literally don't have to use my mouth; you can just look in my brain, map it, and see, this is what I wanted.

Pierson Marks (26:51)
I mean.

Yeah.

And I kind of agree. You wear your thinking cap, and then you type your prompt. The prompt is the prompt, but your thinking cap supplies all the context. That'd be wild. But yeah, try out Claude Code, everybody.

Bilal Tahir (27:03)
Haha

Yeah.

Yeah. But I do think there's something.

Another thing I want to mention about this: Steve Jobs had this quote, I remember. In the 80s he did this interview, and he was like, in every field, the best person is maybe two or three times better than the average person.

But in software engineering, and he said this in the 80s, the best person may be 50 to 100 times better than the average person. The context was that the interviewer was asking about pay; he was talking about his A players, why it's important to have the best people on your team, and that's why Apple is so successful.

And as I was going through all these posts after Christmas, I was looking at these amazing setups. Some people had this insane, I'm-running-five-Claude-Code-Max-plans-in-parallel thing and all this shit. I genuinely believe we're gonna see that multiple go up to maybe 500, 1,000, 10,000 times. And that sounds great. Imagine someone being 10,000 times more productive. That's an insane thing. But I feel like with this tooling, that leverage, that multiple, is gonna go up, and that's gonna be interesting.

Obviously the first-order effect is you're just going to see people crushing it. Some people are just going to be dominating. We've talked about how the one-person billion-dollar company might happen. I mean, I don't know, that would be a stretch. I'm an optimistic person, but who knows, maybe someone does something crazy. It might be too early for one person, but...

Pierson Marks (28:29)
You know what I don't think it's that

crazy though. I just want to point this out: Instagram got bought by Facebook in 2012, and they had a team of like a dozen people, and that was a billion dollar acquisition. And that was, you know, almost 15 years ago. So I think it's possible. I do think it's possible. Though if you're going it alone, it's a lonely path.

Bilal Tahir (28:36)
Right. Right, yeah.

Right, that's true, yeah. Yeah, no, I mean, that's true. That's a good point. Same with WhatsApp, they were like 49

people. But the second order effect will be interesting, because, I mean, obviously people are talking about layoffs and junior engineers not getting opportunities, but this is definitely gonna exacerbate that. For a company, it's not even laying off five people for one anymore; I can lay off a whole team

and just have one guy or something like that. So the second order effect is where you truly see Pareto's law. It's a very interesting thought experiment: maybe 20% of the population can basically run the economy, and the 80% are almost like a net negative.

I mean, I'm not saying good or bad, I'm not trying to make a moral judgment. I'm just saying, purely based on economics and what works, some people are, we know intuitively, just better at some things than others.

It's a bitter pill to swallow, but that's just the truth. And I think this will just make that way more stark, you know? It's kind of like LeBron James versus me: yeah, he's six-nine, he can dunk, I can't, that's an obvious difference. But in other ways, I feel like you can hide that with white collar work. You're like, yeah, I can write that essay, I can code like that. I feel like that's just gonna become way more, nah, I can't do that.

Pierson Marks (30:14)
Great.

Totally.

Yeah, no, it's super interesting. I've always believed that in many skills, almost all skills, there's a predisposed genetic range, essentially, where you're born with some strengths and weaknesses, and some of them are natural. You know, my parents are tall, so I'm tall, and that's just a genetic thing. That's a fact. And some people are shorter than others. In any dimension, you have

skills and strengths and weaknesses. But I don't think those are static. You're born with some potential, and you can then decide how much of that potential you want to realize. And I think what most people fail to realize is that just because I'm not something today doesn't mean I can't be tomorrow, or can't be better tomorrow. I had a close friend in high school; he now works in Washington, D.C., advising the White House on AI policy. He's an incredibly smart person, one of the smartest people I ever had

Bilal Tahir (30:50)
Yes.

Pierson Marks (31:12)
the

privilege to know growing up. And he joined choir freshman year, and I was like, why are you joining choir? You suck at singing. This is a guy I played Minecraft with every day after school; we did all these things together, and it was so fun. And he's like, yeah, I'm doing choir for my arts credit. And he was a horrible singer.

But he liked it so much that he practiced, and by the end of senior year this guy had an amazing voice. He's not gonna be an A-list American Idol, but it's the kind of thing where most people say you're a singer or you're not. It's not binary. Most skills aren't binary. You can actually work at it, and it'll be the same thing here.

Bilal Tahir (31:51)
No, I agree 100%. And it's an interesting,

no, I 100% agree. And that's very interesting. I think it comes down to two things. There's stuff you want to do just for fun, and you shouldn't be limited by thinking, I won't be top tier at it, so I shouldn't do it. But then, separately, I do believe everyone has something that they're amazing at. Everyone has that talent. And the way our education system works, at least, I feel like it's, if you get...

Pierson Marks (31:59)
All

Bilal Tahir (32:18)
is, when you get one C and you go to the parent-teacher meeting, they're gonna be like, why'd you get a C, right? Let's bring that up to a B, even things out. Our instinct is to work on our weaknesses. And I'm not saying you should be blind to them; if they're truly weaknesses, you should look at them. But at the same time, I feel like a lot of alpha is sometimes in just tripling down on the one thing you're naturally good at, that subject you were getting an A in without working. Like...

just do more of that, right? So this year, from a personal development standpoint, I urge anyone listening to find out what your super strength is, the one thing that comes to you effortlessly, and then really try to hone that with AI and these tools, because I feel like that's truly how you'll hit that 50, 100, 1,000x multiple.

Pierson Marks (33:07)
Totally. No, it's interesting. I read an article about this exact thing, and I'd actually take a different perspective on the A's and C's thing. I do agree that the educational system, and how you're rewarded as a child, is maybe not the best, because people do have different skills; some people might be better at math or English or whatever, and you need to make sure those people don't feel like there's a ceiling they can't blow past in the classroom or at home, and that they

have the resources available to really focus on that. But on the other side, as a child, when you're growing, I think it's very key to also work on the weaknesses. There have been some studies about this: if you overly specialize children at a young age, on average they plateau much earlier than somebody who stays broad and generic, maybe a B student across all these domains, more of a generalist

with broad knowledge, broad exposure to things like maths and sciences and music and arts and athletics, all these different areas. And there are some really cool studies about how those individuals who were generalists as children, maybe they weren't superstars in any one area, but then they find later in life what their strengths really are, and those strengths end up going much further than those of the people

who had strengths in that one area as children and focused only on that. It was interesting, some interesting studies.

Bilal Tahir (34:33)
Yeah, no, I

agree. I mean, a lot of discoveries are interdisciplinary, so having that kind of holistic education helps. I agree, there's definitely that side to it. As with anything, this is where balance is key, right? You need a good foundation in a lot of things, but then you should be T-shaped, as the corporate lingo goes.

Pierson Marks (34:55)
All right, all right, all right. Totally.

Well, OK, sweet. So we've been discussing a lot of this stuff, and I want to talk about a few more things. There were some cool releases this past week. There's LTX-2, which was the first truly open 4K video model with synchronized audio. You want to dig into that?

Bilal Tahir (35:06)
Yes.

Yeah.

Yeah, I mean, so, native audio. Lightricks released LTX-2 a little while ago, I think, but now they've come out with the 19 billion parameter open source model. They had this whole closed source base model that you could use, which had native audio, up to 4K resolution, and I think 50 frames per second, which is kind of insane. I don't know how many frames you need, but you have it now. And...

They released a 19 billion parameter model; there's a base model and then there's a distilled one, which is even cheaper. And that was great for the open source community. The LocalLLaMA subreddit was really celebrating. The company behind LTX-2, Lightricks, their CEO just did an AMA there, and I was reading through it. They love it because the best open model they had before this was WAN 2.2. And then WAN...

came out with every version since then closed source, from 2.5 onward. So obviously the open source guys were banging their fists on the wall, like, what the hell, where's our open source model? And LTX-2 has taken up that mantle now. Great video, like you said, synchronized audio, which is awesome. I think that's going to be a very key thing, having these native voices and having control over them. Interestingly, the problem with

native audio is that you can get different voices in different clips. Kling actually came out with 2.6, and they have this interesting thing called a voice ID. What you can do is create a Kling voice, then pass a voice ID parameter, and it'll use that voice for that character in all clips. I thought that was a very interesting way to solve consistency for audio generation, at least, and I bet you're going to see something similar from other models too.
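The voice ID pattern described above can be sketched as a request shape. To be clear, this is a hypothetical illustration reconstructed from the conversation: the `voice_id` and `prompt` field names and the payload layout are assumptions, not Kling's documented API.

```python
# Hypothetical sketch: pin one voice to a character across many clips by
# reusing a stable voice ID. Field names are illustrative assumptions,
# not Kling's documented API.

def make_clip_request(prompt: str, voice_id: str) -> dict:
    """Build a clip-generation payload that reuses a previously created voice."""
    return {"prompt": prompt, "voice_id": voice_id}

# Create the voice once (one-time call, not shown), then reuse its ID everywhere.
detective_voice = "voice_abc123"  # placeholder ID from a voice-creation step

clip_1 = make_clip_request("The detective enters the room and speaks.", detective_voice)
clip_2 = make_clip_request("Later, the detective recaps the case.", detective_voice)

# Both clips reference the same voice, so the character sounds consistent.
print(clip_1["voice_id"] == clip_2["voice_id"])  # True
```

The design point is simply that voice identity lives outside any single generation call, so every clip that mentions the character can point back at the same ID.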

Pierson Marks (36:57)
So on that

Kling version with the voice ID, so if there were two characters in the video, would you identify each character with a voice ID? Like if you wanted to make sure each character kept the same voice.

Bilal Tahir (37:08)
Yeah, I

haven't tried the voice ID part, but I imagine there's an alias or something you use, like, he says this, et cetera. It's something I've been thinking about; this is still so hacky. At the end of the day, you should just have characters. You should be able to say, oh, Sam said this to John.

And the model should just know: Sam is this guy, this is his voice, he says it. I shouldn't need to set all these parameters. Because right now with Kling, there's the O1 model, and you have to do this thing with the @ symbol: you do @image_1, use that to create @video_1, all these weird aliases you have to create in the prompt. You basically use an @ symbol to create an alias for

Pierson Marks (37:45)
in the prompt.

Bilal Tahir (37:51)
the different assets basically.
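The @-alias pattern being described might look roughly like this. The exact syntax and field names here are assumptions pieced together from the conversation, not Kling's documented prompt format, and the asset URL is a placeholder.

```python
# Hypothetical sketch of the @-alias prompt pattern: each uploaded asset gets
# a short handle, and the prompt refers to assets by those handles.
# Syntax and field names are illustrative assumptions.
assets = {
    "image_1": "https://example.com/detective.png",  # placeholder asset URL
}

# The prompt ties the generation to the aliased asset.
prompt = "Use @image_1 as the main character. @image_1 walks into the rain."

# A request would bundle the aliased assets together with the prompt.
request = {"prompt": prompt, "assets": assets}
print(sorted(request["assets"]))  # ['image_1']
```

The point of the hack is that the alias is the only thing connecting the prompt text to the binary asset, which is exactly the indirection Bilal is saying the model should eventually handle natively.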

Pierson Marks (37:54)
Right. I mean, this is what I think: the models are not gonna... Well, the models will get better at this natively. But I think a lot of these things will come at

the harness or the application layer above it. The model's going to have some limitations, because some things are hard, and it's just not going to make sense to do them at the model level. But you can build the application around it where you're still hitting the API, and you let deterministic code figure a lot of this stuff out, and to the user it looks like, oh, the model produced output that was consistent and everything. But it was actually the model plus

an application; you'll see a lot more of this. Like v0, I was reading into how v0, Vercel's design agent, works. There's a model under the hood that writes the code, but while it's streaming tokens back to the user as it creates a UI, it's doing a lot of cool stuff, interpreting that stream so it can fix syntax errors without having to go back to the model and say, hey, this was wrong. So for example,

as the code streams down, there's an icon library called Lucide that v0 uses, that kind of everybody in the industry uses, and that icon library changes very frequently, so the models often don't know about the newer icons. So when v0 generates the code, they have the syntax tree of the most recent Lucide version and all its exported icons, and they can actually detect mismatches while the tokens are streaming down and do inline replacement.

Bilal Tahir (39:19)
you

Pierson Marks (39:33)
And then it goes back and fixes the messages, swapping the replacements into the actual message history, so on the next message the whole conversation reflects what's actually available in the most recent Lucide library. Even though the model generated the wrong icon, it got fixed on the way out. Real-time review. It was really cool.
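The stream-time fixup idea Pierson describes can be sketched roughly as follows. This is not v0's actual implementation: the icon names in the valid set, the JSX-tag pattern, and the fuzzy-matching rule are all assumptions used purely for illustration.

```python
# Sketch of stream-time identifier fixup: check icon names in generated code
# against a known set of valid exports, and swap near-misses for the closest
# valid name before the user ever sees them. Illustrative only.
import difflib
import re

# Assumed set of currently exported icon names (stand-in for a real manifest).
VALID_ICONS = {"ArrowRight", "CircleCheck", "House", "Sparkles"}

def fix_icon(name: str) -> str:
    """Map an unknown icon name to the closest valid export, if one is close enough."""
    if name in VALID_ICONS:
        return name
    match = difflib.get_close_matches(name, VALID_ICONS, n=1, cutoff=0.6)
    return match[0] if match else name

def rewrite_chunk(chunk: str) -> str:
    """Rewrite icon identifiers in a streamed chunk of generated JSX-like code."""
    # Assume icons appear as self-closing <IconName /> tags in the stream.
    return re.sub(r"<(\w+) />", lambda m: f"<{fix_icon(m.group(1))} />", chunk)

print(rewrite_chunk("<Home />"))      # stale name gets swapped for <House />
print(rewrite_chunk("<Sparkles />"))  # valid names pass through unchanged
```

A real harness would also rewrite the message history with the corrected names, as described above, so the model's future context matches what was actually emitted.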

Bilal Tahir (39:56)
Wow, so it's almost like a real-time review that's going on. That's crazy. Yeah.

Pierson Marks (40:02)
So

the model just seems, to the end user, like it's always streaming down the right thing. You're like, how is it doing this? But it's real-time review and replacement under the hood. So, it's cool.

Bilal Tahir (40:12)
That's pretty cool, interesting. Yeah, it's crazy. 2026 is gonna be so exciting. So many things to do.

Pierson Marks (40:17)
Yeah. So many things to

do. Well, yeah, I mean, there were some other things that we could talk about, but maybe we'll save them for next time.

Bilal Tahir (40:26)
Yeah, yeah, I think we covered a lot. I mean, as always, just going everywhere, you know, all over. It's

Pierson Marks (40:30)
Everywhere

super fun. Well, that was episode 26. Tune in to episode 27 next week, and we'll talk to everybody then.

Bilal Tahir (40:39)
All right,

take care guys.
