TVNweather.com | Tornado Chasers

What Does 4K Resolution Technology Mean For Future Storm Chasing?


#1

How many of you have been to your local electronics store to see the new 4K resolution TVs? It’s amazing, isn’t it? The picture is so clear and the detail so fine that everything looks almost like real life. Even though 4K TVs run around $7,000 to $10,000 right now, what does this mean for the future, when 4K TVs become more affordable?

In the media world, the programs shown on TV are only as good as the equipment that captures the footage and the TV used to watch them. 4K resolution is just around the corner when it comes to watching TV programs. Do you see yourself recording tornadoes and severe weather events in 4K in the future? What are your thoughts?


#2

This is a really interesting question, and it’s one that I know a lot of storm videographers and producers are pondering right now. The TV industry is really pushing 4K because they want us all to purchase new TVs.

One thing I noticed is that the in-store 4K demo TVs are mostly showing 4K video that’s running at 60 progressive frames per second (or 60p). I think one of the reasons the video looks so lifelike is because it’s running at 60p (as opposed to it being 4K). Most current full HD content runs at 24p, 30p or 60i, so the extreme smoothness of 60p is new and jaw-dropping to most people. But it’s important to remember that 60p isn’t the same thing as 4K (current 1080p TVs can display 60p as well).

As far as the 4K (or UHD) resolution itself, I don’t think that’s as important right now if you’re wanting to shoot content for broadcast. Most networks just finished the costly upgrade to HD a few years ago, and you’ve probably noticed how badly compressed a lot of current TV content is (especially blocky artifacts). If current HD is a strain for networks, I can only imagine how challenging 4K will be. You probably won’t notice the increased resolution on TVs as much as on computer monitors or other screens that are closer to your eyes.

Scientifically, I think that the increased resolution of 4K is great for observing tiny details (objects on the ground, debris, etc). Until everyone gets 4K monitors, though, you’ll need to crop and zoom so people can see these details on a 1080p screen.
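As a concrete illustration of that crop-and-zoom step, here’s a minimal Python/numpy sketch (my own example, not any particular editing tool’s workflow) showing that a 1920x1080 window cut out of a UHD frame is a clean 2x punch-in with no upscaling:

```python
import numpy as np

# A UHD frame is 3840x2160; a 1080p frame is 1920x1080.
# Cropping a 1920x1080 window out of a UHD frame gives a 2x "punch-in"
# with no interpolation -- every pixel shown is a real captured pixel.

uhd_frame = np.zeros((2160, 3840, 3), dtype=np.uint8)  # stand-in for real footage

def crop_1080p(frame, center_x, center_y):
    """Cut a native 1920x1080 window around (center_x, center_y)."""
    h, w = 1080, 1920
    x0 = min(max(center_x - w // 2, 0), frame.shape[1] - w)
    y0 = min(max(center_y - h // 2, 0), frame.shape[0] - h)
    return frame[y0:y0 + h, x0:x0 + w]

# e.g. punch in on debris near the lower-right of the frame
detail = crop_1080p(uhd_frame, center_x=2800, center_y=1600)
print(detail.shape)  # (1080, 1920, 3) -- full 1080p output, no upscaling
```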

Another important thing to remember is that it’s really early for 4K, and 4K cameras will get better in the future. All things being equal (and money being no object), 4K is great. But if you’re on a budget for cameras, I think the focus should always be on the quality of the lens and imaging chip. A 1080p camera with a Super35 chip is almost always going to produce a better image than a 4K camera with a 1/3" chip.

What do you guys think? Are you taking the 4K plunge?


#3

Thanks for the addition, Ken; that really helped me get some insight into how 4K and 60p differ when it comes to the picture. I have done a little homework on 4K. I don’t think my findings make MUCH difference, but I was reading a Wikipedia article explaining that 4K footage reduced to 2K during finishing can show more contrast and fine detail than a straight 2K camera shot. I’ll quote the article here and provide a link.

“The main advantage of recording video at the 4K standard is that fine spatial detail is resolved well.[39] This contrasts with 2K resolutions in which fine detail in hair is displayed poorly.[citation needed] If the final video quality is reduced to 2K from a 4K recording more detail is apparent than would have been achieved from a 2K recording.[39] Increased fineness and contrast is then possible with output to DVD and Blu-ray.[40] Some cinematographers choose to record at 4K when using the Super 35 film format to offset any resolution loss which may occur during video processing.[41]” (Wikipedia)

http://en.m.wikipedia.org/wiki/4K_resolution

I trust you know WAY more than I do, and what you are saying makes plenty of sense. 4K is really overkill right now. Still, given the benefit of better fine detail and contrast when the final video is downscaled, would it be worthwhile to invest in a 4K camera for today’s 1080p use and for tomorrow’s 4K awesomeness? I can see the trade-off, though: I can guarantee that better 4K cameras will be made, which makes investing in one right now pretty much a moot point.
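For what it’s worth, here’s a tiny Python sketch of the simplest version of that “shoot 4K, finish in 2K” idea: averaging each 2x2 block of captured pixels into one delivered pixel. This is a plain box filter; real finishing software uses fancier filters, so treat it as illustrative only.

```python
import numpy as np

# Minimal sketch of "shoot 4K, finish in 2K": each delivered pixel is the
# average of a 2x2 block of captured pixels, so random sensor noise is
# partially averaged out and fine detail holds up better than if the
# camera had only sampled at 2K to begin with.

def downsample_2x(frame):
    """Average non-overlapping 2x2 blocks (box filter) to halve each dimension."""
    h, w = frame.shape[:2]
    blocks = frame[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, -1)
    return blocks.mean(axis=(1, 3))

uhd = np.random.rand(2160, 3840, 3)   # stand-in for a noisy UHD frame
hd = downsample_2x(uhd)
print(hd.shape)                        # (1080, 1920, 3)
print(uhd.std(), hd.std())             # noise spread roughly halves after averaging
```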


#4

4K is AWESOME.

Currently, I use a GoPro that will record 4K at 30fps and a Samsung 4K monitor to edit the footage. As you’d expect with four times the information captured/displayed compared to standard 1080p HD, the quality is absolutely stunning. The only problem I’ve found with it is the file size. 4K files are HUGE! Two minutes is roughly a GB of data. The higher cost of SD cards with read/write speeds fast enough to keep up isn’t a deal-breaker for me; you can always find one on sale at Amazon or Best Buy for a good price. You’ll have to budget about a dollar per GB for the card. I use a Samsung Pro 32GB card that goes for $30. I have tried other cheaper cards, but they’ll error out if you try to record in 4K or anything above 48fps in 1080p.
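For anyone budgeting cards, here’s the back-of-envelope math in Python using the rough figures above (about 1 GB per two minutes of 4K and about a dollar per GB of fast SD card). These are my own rough estimates, not official specs:

```python
# Back-of-envelope storage math using the rough numbers in this post
# (~1 GB per 2 minutes of 4K, ~$1 per GB of fast SD card).

GB_PER_MINUTE = 0.5          # ~1 GB every 2 minutes
CARD_SIZE_GB = 32            # e.g. a Samsung Pro 32GB card
CARD_PRICE = 30.0            # dollars per card

def storage_for(minutes_recorded):
    gb = minutes_recorded * GB_PER_MINUTE
    cards = -(-gb // CARD_SIZE_GB)   # ceiling division
    return gb, int(cards), cards * CARD_PRICE

for minutes in (30, 60, 180):        # a quick cell, an hour, a long chase day
    gb, cards, cost = storage_for(minutes)
    print(f"{minutes:4d} min -> {gb:6.1f} GB, {cards} card(s), ~${cost:.0f}")
```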

Camera-wise, Samsung is doing some cool things in the world of consumer 4K recording. Two of their current cameras, the NX1 ($1,500 body only) and the NX500 ($800 with kit lens), will record 4K directly onto an SD card. They use a new codec (H.264) instead of the widely used H.265, which supposedly cuts the file size down to almost 1/10 of the latter’s. As of now, editing that H.264 is a giant pain in the butt due to the lack of support for the file type, but it is “the future” of 4K files and should develop a better ecosystem sooner rather than later.

With the up-converting processing built into the TVs, I wouldn’t be too worried about content; Netflix already streams in 4K. I’ve watched a fair amount of non-4K media on these TVs and it is definitely an upgrade. Truthfully, you’ll know instantly whether something was shot in 4K or up-converted, but it’s a start.

Hopefully we’ll start to see some documentaries of weather shot in 4K by someone with a bigger budget so we can truly see the difference. I imagine once we do, there’s no going back!

I can’t wait to get recording this storm season in 4K!!


#5

One issue with 4K streaming is that services tend to compress the video heavily to lower the bitrate, which cuts their bandwidth costs and lowers the connection speed required of viewers. The discussion around 4K always seems to focus on resolution (the number of pixels), but resolution is not the same thing as quality. Depending on the complexity of the content, reducing the bitrate does reduce quality even though the resolution stays the same.

It does seem like connection speeds in the United States are getting to the point where they can handle 4K streaming, which probably requires somewhere around 25-30 Mbps and up (assuming content is encoded in H.264). I would also imagine a wide range of 4K bitrates depending on the service: paid services like Netflix are probably going to be on the higher end, while free services like YouTube and Vimeo are going to be highly compressed, possibly to the point where they no longer really qualify as ideal 4K bitrates.
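To see why that distinction matters, here’s a rough back-of-envelope Python sketch comparing bits per pixel at a few resolution/bitrate combinations. The bitrates are made-up examples for illustration, not any service’s published numbers:

```python
# Rough "bits per pixel per frame" comparison, to illustrate that resolution
# and quality are separate knobs. Bitrates below are hypothetical examples.

def bits_per_pixel(bitrate_mbps, width, height, fps):
    return (bitrate_mbps * 1_000_000) / (width * height * fps)

examples = [
    ("1080p disc-quality",   25, 1920, 1080, 24),
    ("1080p streaming",       5, 1920, 1080, 30),
    ("4K streaming",         16, 3840, 2160, 30),
    ("4K heavily squeezed",   8, 3840, 2160, 30),
]

for name, mbps, w, h, fps in examples:
    print(f"{name:20s} {mbps:3d} Mbps -> {bits_per_pixel(mbps, w, h, fps):.3f} bits/pixel")
```

Notice how a heavily compressed 4K stream can end up with fewer bits per pixel than a decent 1080p stream, which is exactly the “resolution isn’t quality” point.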


#6

You have your video codecs backwards. H.264 is widely used right now, while H.265 is relatively new. You can definitely encode video in H.265 today, but the range of devices that can decode it is limited compared to how widely adopted H.264 decoders are.

H.265 also requires more processing power on the decoding side, which means a lot of connected devices like Roku and Apple TV are not going to be able to simply receive a firmware update to handle it; you’ll have to buy a new device with more capable hardware.

H.265 is definitely an improvement in how well the algorithm compresses video, but the gains seem to be somewhere in the 40-50% range depending on the content and quality settings. Still, a 40-50% reduction in the bandwidth required for streaming would be a welcome improvement; I just think widespread adoption is still a ways off.
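Putting that together with the 25-30 Mbps H.264 figure mentioned above, here’s the trivial arithmetic (purely illustrative, in Python) for what a comparable-quality H.265 4K stream might need:

```python
# Quick estimate combining the numbers in this thread: if 4K in H.264 needs
# roughly 25-30 Mbps, and H.265 saves somewhere around 40-50% for comparable
# quality, what might an H.265 4K stream need? Purely illustrative math.

h264_4k_mbps = (25, 30)
savings = (0.40, 0.50)

low  = h264_4k_mbps[0] * (1 - savings[1])   # best case: 25 Mbps, 50% saved
high = h264_4k_mbps[1] * (1 - savings[0])   # worst case: 30 Mbps, 40% saved
print(f"Estimated H.265 4K bitrate: ~{low:.0f} to ~{high:.0f} Mbps")
# -> roughly 12 to 18 Mbps, which is why H.265 matters so much for streaming
```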


#7

It might be more feasible to deliver videos to our community in 4K at good quality (hopefully without heavy compression) if we use downloadable files instead of streaming. That would let viewers with limited connections avoid the frustration of buffering while still getting the resolution they want. I only foresee this being used on laptops and desktop computers that can handle that kind of playback, though. With downloadable files we would also need to be careful about security: we don’t want customers freely redistributing files that TVN monetizes. That would mean actually building a client that decrypts the video files for viewers to watch. It all sounds good in my head, but I am not well-versed in how that would work, if it works at all.

This is extremely good information and I look forward to seeing more posts!


#8

We’ve never been worried about downloads; we offered DRM-free downloads of every episode of Tornado Chasers. We believe customers should not be treated like potential criminals, and that they deserve the right to copy their content to all of their devices and make as many backups as they wish, with no limits on their own personal use of the content.

Unfortunately, most of the entertainment industry doesn’t share that point of view; they want DRM so they can keep control over paying customers. They want to be able to revoke your access to the content (as Amazon has done) or limit how many devices you can put it on, so that you’ll have to pay for it again in the future.

DRM or not, downloads do help solve the issue of throughput required for streaming, but you are right that downloads don’t fit every device connected to a display. Not everyone has a home theater PC they can just fire an MP4 up on and play back on their TV, and you can’t download a video file to your Roku. However, newer technologies like Google Chromecast and Apple AirPlay (via Apple TV) do make it much easier to take a video file and stream it to a display over the same Wi-Fi network.


#9

You bet, Ace. As far as the benefit of reducing 4K to 1080p goes, yes, you’ll get a better 1080p picture in terms of detail, contrast, and color resolution (than with a comparable 2K sensor). However, I’d always look at the quality of the imaging sensor first.

As videographers, we ideally want the imaging sensor to be as large as possible, which means we want each photosite to be as large and as sensitive to light as possible. Because these are the early days of 4K, we’re seeing a lot of companies putting 1/2.3" 4K smartphone imaging sensors in prosumer cameras. That means they’re squeezing 8.8 effective megapixels onto a 1/2.3" chip. These sensors can get decent images as long as there’s a lot of daylight, but they’ll get noisy when shooting at dusk/night or while shooting indoors.

Contrast that with a 1.1" Super35 chip with the same effective 8.8 megapixels. Each photosite will be much larger and much more sensitive to light. You’ll always get a superior picture, including in extreme low-light conditions. And frankly, if we’re shooting tornadoes, a decent chunk of them will occur in lower-light conditions.
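To put rough numbers on the photosite difference, here’s a quick Python sketch using approximate nominal sensor dimensions (roughly 6.2 x 4.6 mm for a 1/2.3" chip and roughly 24.6 x 13.8 mm for a 1.1"-diagonal Super35-style 16:9 area; real sensors vary a little):

```python
import math

# Rough photosite-size comparison for 8.8 effective megapixels on two sensor
# sizes. Sensor dimensions are approximate nominal figures, not exact specs.

MEGAPIXELS = 8.8e6

sensors = {
    '1/2.3" (smartphone-class)': (6.2, 4.6),
    'Super35 (~1.1" diagonal)':  (24.6, 13.8),
}

for name, (w_mm, h_mm) in sensors.items():
    area_mm2 = w_mm * h_mm
    site_area_um2 = area_mm2 * 1e6 / MEGAPIXELS       # mm^2 -> um^2 per photosite
    pitch_um = math.sqrt(site_area_um2)
    print(f"{name:28s} {area_mm2:6.1f} mm^2 total, "
          f"~{site_area_um2:4.1f} um^2 per photosite (~{pitch_um:.1f} um pitch)")

# The Super35-sized chip has roughly 12x the area, so each photosite gathers
# roughly 12x the light -- which is where the low-light advantage comes from.
```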

The reason I’m pointing this out is that right now, if you have roughly $4K-$5K to spend on a camera, you could be looking at either a 4K [Sony PXW-Z100][1] ($5,500) or a 1080p [Canon C100][2] ($4,000 without lens). The Sony is 4K but uses a 1/2.3" chip (the size used in smartphones) and won’t be great in lower light. The Canon is 1080p and less expensive (even after purchasing a lens), but it uses a Super35 cinema sensor that captures the image at 4K and downconverts in-camera to 1080p before recording.

So basically you’re choosing between an okay 4K image vs. a superior 1080p image. If you’re going to be converting the 4K Sony down to 1080p anyway, why not just save some money and get the Canon for the superior image?

Obviously, there are many factors to consider when looking at a camera (including ergonomics, ease of use, and how data-intensive it is), and a good videographer can make a camera at any price work. You just need to learn your camera and its sweet spot.

It’s just that we all get so excited when new technology appears. Before you purchase, remember that 4K technology will improve, and 4K cameras released in a year or two may show drastic improvements in image quality. If you’re okay taking that risk as an early adopter, I’d say go for it.
[1]: http://www.bhphotovideo.com/c/product/1004182-REG/sony_pxw_z100_4k_handheld_xdcam_camcorder.html
[2]: http://www.bhphotovideo.com/c/product/889545-REG/Canon_EOS_C100_EF_Cinema.html


#10

Awesome info!

What is the 1.1" Super35 chip comparable to in terms of image sensor size? Is it a “full frame” sensor? And if I understand correctly, a photosite is each individual light-sensitive element on the sensor?

How does either of these compare to a CMOS sensor? I am strongly considering purchasing the Samsung NX1 prosumer mirrorless camera, hopefully getting good-to-great bang for the buck at $1,500.


#11

Super35 is smaller than DSLR full-frame sensors and closer to APS-C size, and you’re right about the photosite. Most sensors these days are a type of CMOS. If you’re interested in a camera, always read multiple expert reviews and try downloading some native sample footage to inspect for yourself. Every camera will have its own strengths, weaknesses, and sweet spots. Make sure the camera fits your needs, and compare it against the competition in the same price range. After all that, if you like the camera’s images, go for it.