Hello Interview – SWE Interview Preparation
00:00 – Intro
01:37 – The Approach
03:37 – Requirements
08:05 – API & Core Entities
11:41 – High Level Design
20:16 – Deep Dives
42:07 – Conclusion
A step-by-step breakdown of the system design interview question, Design YouTube.
Evan, a former Meta Staff Engineer and current co-founder of Hello Interview, walks through the problem from the perspective of an interviewer.
Connect with me on LinkedIn:
Evan: https://www.linkedin.com/in/evan-king-40072280/
Resources:
1. Practice with our new Guided Practice: https://www.hellointerview.com/practice
2. Detailed write up of the problem: https://www.hellointerview.com/learn/system-design/answer-keys/youtube
3. System Design In a Hurry: https://www.hellointerview.com/learn/system-design/in-a-hurry/introduction
4. Excalidraw used in the video: https://link.excalidraw.com/l/56zGeHiLyKZ/4OfV5tY4yBk
5. Vote for the question you want us to do next: https://www.hellointerview.com/learn/system-design/in-a-hurry/vote
Preparing for your upcoming interviews and want to practice with top FAANG interviewers like Evan? Book a mock interview at www.hellointerview.com.
Good luck with your upcoming interviews!
If we store the manifest in S3, and the manifest contains the S3 URLs for the chunked data, shouldn't we store the manifest URL in the video metadata table rather than the S3 URL?
Small pushback here. Chunking protocols such as HLS/DASH are not strictly necessary for low-latency playback, since S3 natively supports range requests, which effectively lets you "chunk" and render the original file using regular HTTP requests.
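To make the range-request point concrete, here is a minimal sketch (the helper name and chunk size are my own, not from the video) of how a client could derive the standard HTTP Range header for the i-th fixed-size byte window of a file stored in S3:

```python
def range_header(chunk_index: int, chunk_size: int) -> dict:
    """Build the HTTP Range header for the i-th fixed-size byte window.

    S3 honors this header on GET, returning only the requested bytes
    (206 Partial Content), so a player can fetch a large file piecewise
    without any HLS/DASH manifest.
    """
    start = chunk_index * chunk_size
    end = start + chunk_size - 1  # Range is inclusive on both ends
    return {"Range": f"bytes={start}-{end}"}

# A real request would look like (presigned_url is hypothetical):
#   requests.get(presigned_url, headers=range_header(3, 5 * 1024 * 1024))
```

The trade-off versus HLS/DASH is that byte ranges alone don't give you adaptive bitrate switching, which is the main reason streaming services still use a manifest.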
Why do we need to list all the chunk S3 URLs in an array? If we know how many chunks the video has, we can assume each URL just ends with the chunk number, e.g. s3://bucket/movie/codec/quality/chunk_number.mp4
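The commenter's suggestion can be sketched as a URL template plus a chunk count (the function and path layout here are illustrative, not from the video):

```python
def chunk_urls(bucket: str, movie: str, codec: str,
               quality: str, n_chunks: int) -> list[str]:
    """Derive every chunk URL from a naming convention and a count."""
    return [
        f"s3://{bucket}/{movie}/{codec}/{quality}/{i}.mp4"
        for i in range(n_chunks)
    ]
```

This works only if every chunk strictly follows the naming convention. An explicit manifest (as in HLS/DASH) additionally lets each chunk carry its own duration, codec, and byte range, which is why real streaming manifests list chunks individually.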
Is S3 cheaper than Firebase Storage?
Hi Evan, thanks, it was a great video. Please make a follow-up on this where the interviewer adds a requirement such as search and filtering for videos, high availability / multi-region, or live streaming support.
Might make sense to have ordered async queues (e.g., Kafka) between the chunker and the transcoder, so transcoding happens asynchronously before storing the output in S3.
I was also a tech lead on a video team, and I just want to make a quick remark: the video processing actually goes in the opposite direction. First you transcode the whole source file, and after that you chunk it. In fact, there don't have to be stored chunks at all: you can keep all the video fragments as one piece and produce chunks on demand according to the client's request, since there are many different formats and transport protocols. So yeah, there are many deep technical details.
When you said the video is uploaded in Germany, so it's okay if U.S. users can't access it for a couple of hours, that means we are prioritizing availability over consistency. But isn't it hurting availability if U.S. users can't use it for a couple of hours? Ah, wait, I think it wouldn't be hurting availability, because availability refers to whether the system accepts the request or not. It would be hurting consistency, because the nodes in the U.S. wouldn't have the same replica of the data.
What app do you use to draw and design the HLD?
Best video on YouTube about YouTube system design, and the best system design video I have ever seen as an engineering leader.
Please cover how to maintain state, say when a user wants to resume watching?
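One common answer to the resume-watching question (sketched here with names of my own; a real system would use a low-latency KV store like Redis or DynamoDB instead of an in-memory dict) is to periodically persist the playback position keyed by (user_id, video_id), then seek to it when the player reopens:

```python
# Stand-in for a KV store keyed by (user_id, video_id) -> seconds watched.
_positions: dict[tuple[str, str], float] = {}

def save_position(user_id: str, video_id: str, seconds: float) -> None:
    """Called periodically by the player (e.g., every few seconds)."""
    _positions[(user_id, video_id)] = seconds

def resume_position(user_id: str, video_id: str) -> float:
    """Called when the player opens; 0.0 means start from the beginning."""
    return _positions.get((user_id, video_id), 0.0)
```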
An amazingly simple way to explain a complex design interview question. Watching these for my Atlassian P40 interview. Fingers crossed!
for the algo
Great video. One thing that I believe could improve the user experience: when users upload a video, we can have notifications in the system, and once the video is uploaded successfully we can notify them, so users don't have to wait while uploading a 256 GB file.
This video is fantastic and very intuitive. I was easily able to understand the entire video streaming flow. Thanks Evan.
I don't understand how availability is satisfied. Yes, the async process to chunk the video into S3 deprioritizes consistency, but that doesn't explain how availability is guaranteed. A little confused here. Any ideas? Thanks.
😍 the video. At the end, I would have preferred a complete quick recap rather than starting from S3.
I was asked this exact question in my Amazon SDE 2 interview, and I cracked the round and ended up getting an offer. Thanks a lot, Hello Interview!
Great system design video! In my opinion, if the browser is the client, it cannot call S3 directly. The first request should go to the video service, and then it's okay to call S3 for the video upload on behalf of the client.
Best HLD explanation ever, Thanks Evan!
1. Are the S3 notifications able to update the DB directly? Shouldn't there be an intermediate service handling the DB updates?
2. Why do we want to save both the full video and the chunks?
Thank you.
thanks!! Very useful video
the closing note on grind is all we need
As an SDE with 3.5 yrs of experience, this video was absolutely CRAZY !!!
BESTTTTT one
A chunking service splits a large video into smaller chunks and generates multiple quality versions. This allows the video to stream efficiently, even on a low-bandwidth network.
Before the user can publish the video from the web or mobile app, the system must ensure that:
– all chunks of the video file are uploaded;
– all quality variants are generated.
How is this handled in the current design?
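One way to express the publish gate this comment asks about (all names here are mine, not from the video): record per-chunk upload status and per-variant transcode status in the metadata DB, and flip the video to "published" only once both sets are complete.

```python
REQUIRED_QUALITIES = {"240p", "720p", "1080p"}  # illustrative variant set

def ready_to_publish(uploaded_chunks: set[int], total_chunks: int,
                     generated_qualities: set[str]) -> bool:
    """True once every chunk is uploaded and every required variant exists."""
    all_chunks = uploaded_chunks == set(range(total_chunks))
    all_variants = REQUIRED_QUALITIES <= generated_qualities
    return all_chunks and all_variants
```

In practice the "uploaded" and "transcoded" flags would be set by S3 event notifications and transcoder completion events respectively, with the check run on each event.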
This is a great video and I really love the content. I just started learning system design more deeply, but could you explain why you put the API before the high-level design? Thanks.
Important: the API Gateway must also scale horizontally. Otherwise, it becomes a single point of failure. In interviews, either draw multiple gateway instances behind a load balancer or clearly state that it runs as replicated nodes.
Routing can be done via Round Robin, Least Connections, Weighted Round Robin, IP Hash (for stickiness), or latency-based routing depending on the use case.
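Two of the routing strategies listed above can be sketched in a few lines (illustrative only, not a production balancer):

```python
import itertools

def round_robin(backends: list[str]):
    """Hand out backends in a repeating cycle."""
    return itertools.cycle(backends)

def least_connections(active: dict[str, int]) -> str:
    """Pick the backend currently holding the fewest active connections."""
    return min(active, key=active.get)
```

Round robin is stateless and fair under uniform load; least connections adapts better when request costs vary, at the price of tracking per-backend state.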
Overall, great system design video, I really enjoyed it. Thanks for sharing.
Thank you for doing this
S3 is a real one
GOAT
The last 3 minutes is what I TRULY needed — thank you good sir! World needs more of this. Keep up the Lord's work!
When S3 sends a notification to the video service after the upload, how does the video service know which video the notification is for? Does S3 send a video ID that the video service can use to update the DB?
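S3 event notifications include the bucket name and the object key, so the usual pattern is to encode the video ID in the key when the upload URL is issued (the key layout uploads/&lt;video_id&gt;/original.mp4 below is my assumption, not from the video):

```python
def video_id_from_event(event: dict) -> str:
    """Extract the video ID from an S3 event notification.

    S3 delivers events with the shape Records[0].s3.object.key; if the
    upload key was issued as uploads/<video_id>/original.mp4, the ID is
    recoverable from the key alone.
    """
    key = event["Records"][0]["s3"]["object"]["key"]
    return key.split("/")[1]
```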
@28:30, instead of storing all the chunks at different resolutions, could you just have a converter that takes the original 4K chunk and converts it to 240p (or whatever) before returning it to the client?