Industry | CineD

“Civil War” Feature Film by Alex Garland Shot on the DJI Ronin 4D

During the SXSW 2024 festival in Austin, Texas, director and screenwriter Alex Garland showcased the upcoming “Civil War” feature film and revealed that it was shot on the DJI Ronin 4D. Curious to learn more about it? Then let’s dive straight in!

The DJI Ronin 4D-6K was released in October 2021 – here’s our full video review in case you missed it – and it took the company an extra two years to finally launch the Ronin 4D-8K with the Zenmuse X9-8K camera. While this one-of-a-kind filmmaking device is impressive and can produce unique results thanks to its 4-axis stabilization, the Ronin 4D never really made it into Hollywood and feature films. Indeed, to this day, its use has mostly been limited to short films, commercials, music videos, and documentaries, and it is still struggling to make it onto the big screen.

During South by Southwest (SXSW) 2024, screenwriter and director Alex Garland presented his new movie, “Civil War,” which stars Kirsten Dunst, Wagner Moura, Cailee Spaeny, Stephen McKinley Henderson, and Nick Offerman. Before diving deeper, let’s watch the movie’s official teaser. The film will be released on April 12th, 2024.

Civil War – Shot on the DJI Ronin 4D

You got it: the story behind “Civil War” is easy to summarize. The movie follows a team of journalists who travel across the United States during a rapidly escalating second American Civil War. This dystopian sci-fi movie received positive reviews at its SXSW world premiere.

In an interview with Empire, director Alex Garland revealed that they shot “Civil War” with the DJI Ronin 4D:

It does something incredibly useful. It self-stabilises, to a level that you control — from silky-smooth to vérité shaky-cam. To me, that is revolutionary in the same way that Steadicam was once revolutionary. It’s a beautiful tool.

Alex Garland

Alex Garland mentions that the camera was affordable at around $5,000 – well, $6,799 if we want to be precise – so we can deduce that they shot with the DJI Ronin 4D-6K. And since the movie will be shown in theaters and IMAX, it tells us that Ronin 4D footage can definitely hold up on the biggest screens.

DJI Ronin 4D 6K during our Lab Test. Image Credit: CineD

Why choose the DJI Ronin 4D to shoot Civil War?

Every tool has its pros and cons, but Alex Garland found that the DJI Ronin 4D was the best tool for the job of shooting Civil War:

We knew we needed to shoot quickly, and move the camera quickly, and wanted something truthful in the camera behaviour, that would not over-stylise the war imagery. All of which push you towards handheld. But we didn’t want it to feel too handheld, because the movie needed at times a dreamlike or lyrical quality, which pushes you towards tracks and dollies.

The final part of the filmmaking puzzle — because the small size and self-stabilisation means that the camera behaves weirdly like the human head. It sees “like us.” That gave Rob (Rob Hardy, B.S.C., editor’s note) and I the ability to capture action, combat, and drama in a way that, when needed, gave an extra quality of being there.

Alex Garland
DJI Ronin 4D Flex tether system. Image credit: DJI

The main selling points of the DJI Ronin 4D for Garland were its flexibility and built-in 4-axis stabilization. Time is money on set, and the hours saved by not rigging a dolly, accumulated over several weeks of shooting, can be huge. It also meant the team was faster at setting up and following the action, which can benefit the acting.

Will this be the beginning of a new trend and the start of more movies shot on the DJI Ronin 4D? Only time will tell, but as Garland says, it is “not right for every movie, but uniquely right for some.”

Source: Empire

featured image credit: A24 / DJI (composition by CineD)

Did you already shoot content with the Ronin 4D-6K or 8K? Do you see yourself shooting entire projects with the Ronin 4D? Don’t hesitate to let us know in the comments below!

Poll: Director of Photography, or a Cameraman/Woman – How Would You Describe Yourself?

In this week’s poll, we are very interested in finding out how you would describe yourself. Are you a Director of Photography/Cinematographer, or a cameraman/woman? Although this looks like a simple question, please take a moment to answer it honestly.

Times are changing, and what used to be a “clear job description hierarchy” is no more. Director of Photography/Cinematographer used to be a title given to those who work on set, closely with a director, while “managing” the surrounding creative workforce. We were “innocent enough” to believe that a “Cinematographer” would work on films for cinema (as the name hints), but boy, it looks as if we were wrong. Currently, there are many “cinematographers” out there who have not shot a single frame for cinema. So why is this happening? Are people looking for a shortcut to gain recognition? Or, let’s phrase it this way: can a particular title enhance your self-promotion? One thing is for sure: the title “Cinematographer” sounds much more convincing than “YouTubegrapher”. And let us be clear here: we produce a lot of content for YouTube as well, and know how much effort it takes.

One of the issues here is that everyone (and his mother) can call themselves whatever they like, as there is no “unified certification or standard”. Is this good or bad? Well, as always, it depends on who you are asking.

By the way, the same goes for the title of cameraman/woman, which seems to be an extinct profession (because everyone is a DoP now)…

A cameraman/woman (or a lighting cameraman) used to be a respected profession – one that enabled powerful storytelling through a deep understanding of the equipment and, of course, the lights you were working with. In the old days of film, this was even more significant. However, the shift to digital and the ability to “instantly see what you get” in the viewfinder, coupled with the ability to play back and review results, marked the beginning of the democratization of the profession – and the rest is history.

So who are you? Do you call yourself a Director of Photography/Cinematographer BECAUSE you work on cinema sets, or do you use this title regardless of what you film and where your project will be shown? Or are you a cameraman/woman who is happy to hold a camera and create beautiful images sans the desire to work in Hollywood or Bollywood? Moreover, are you true to yourself with the title you are using?

This poll uses JavaScript. If you cannot see the poll above, please disable “Enhanced/Strict Tracking Protection” in your browser settings.

We would love to hear your thoughts on this topic, so please be so kind as to share them with us by voting in our poll – or better yet, leave a comment below.

CineD Best-of-Show Award at NAB 2024 – Submissions Now Open for Manufacturers

With NAB 2024 just around the corner, our CineD team is gearing up for our largest editorial presence ever. As in previous years at both NAB and IBC, we will once again hand out several CineD Best-of-Show Awards at NAB 2024. For the first time ever, we invite manufacturers to submit their product innovations ahead of the show, ensuring we don’t miss anything amidst the craziness of the event. Additionally, we’ve redesigned the Award trophy from the ground up. Read on to learn more!

In less than a month, the 2024 NAB Show will kick off in Las Vegas, and manufacturers around the world are already gearing up for the industry’s largest trade show. CineD will be there with an even bigger crew, producing our usual video content. However, this year, our presence will be more extensive than ever before. We’ll be covering more technology news and products, not only for YouTube but also directly for social media (Instagram, YouTube Shorts, TikTok).

Submissions of products for consideration at the CineD Best-of-Show Awards at NAB 2024 are open now

In previous years, our Best-of-Show Award winners at NAB and IBC were chosen from the products we covered during the shows. If you missed them, you can check out the announcements for NAB 2023 and IBC 2023.

Beginning this year, we’re changing our approach. We’re now inviting manufacturers to submit their products in the month leading up to NAB. This way, we get a broader overview of relevant innovations and products that deserve our attention.

Seven categories for CineD Best-of-Show Awards at NAB 2024

CineD is accepting submissions to compete for the CineD Best-of-Show Award at NAB 2024 in seven categories:

  • Cameras
  • Camera Support, Control, and Accessories
  • Audio Equipment
  • Lighting Equipment
  • Lenses
  • AI Innovation
  • Streaming, Remote Production & Cloud Workflows

Newly designed Awards Trophy

We’re thrilled to present our new CineD Best-of-Show Awards trophy, redesigned from the ground up. Each winner in every category will receive this trophy in person at the 2024 NAB Show.

The new CineD Best-of-Show Trophy, to be handed out to winners for the first time ever at NAB 2024. Image credit: CineD

Submission process

To submit your product(s) or technology to be considered for a CineD Best-of-Show Award at NAB 2024, please head over to our entry form at Zealous, which has all the details.

–> ENTER HERE.

For our full Terms & Conditions regarding submissions, please read this. Please note that there is a small nomination fee for each entry, and there is no restriction on the number of entries per manufacturer.

If you want to submit a product or technology that hasn’t been announced or released prior to NAB, we are happy to sign an NDA before your submission. Simply send us the NDA via email, and we will return the signed form in due time.

Former CineD Best-of-Show Award Winners

Former CineD Best-of-Show Award winners include ARRI, Sony, Blackmagic Design, LC-Tec, DJI, Zhiyun, frame.io, FUJIFILM, and many, many others. Please note that these winners received our former award design. If you want to read more about previous winners and our reasoning for selecting them, head over to our winner announcement articles from NAB 2023 and IBC 2023.

Any questions?

In case you have any questions about the process, please get in touch with us and we will get back to you as soon as possible.

Canon’s 2024 Strategy – Interesting Hints and Speculation

Canon celebrates their 21st year as the world’s leading manufacturer of interchangeable-lens camera systems. Their 2024 imaging group strategy seems to point to some interesting trends and shifts, including an attempt to establish an absolute position in the mirrorless market. Canon also notes a shift toward the experience of the audio-visual content consumer. The company plans to tackle these challenges, along with efficiency and profitability concerns, using various methods and practices.

Canon boasts an established reputation and is no stranger to innovation and technological progress. The company has maintained their place among the leading patent applicants in the USA for over a decade. Several important innovations gained Canon prominence in the photo-video industry, and the venerable EF mount is fundamental to many of them. The mount, launched in 1987, completely replaced its FD predecessor. Offering fully electronic camera-lens communication, it launched Canon’s system to the top position, which they have maintained ever since.

Canon’s current EF & RF lenses. Image credit: Canon

The inclusion of fully electronic communication and a focus motor in every lens made EF lenses relatively easy to fully adapt to other systems. Almost every modern mirrorless mount has an EF adapter, and most include autofocus and other advanced features. However, as long and interesting as Canon’s history may be, this article is about their future. So, what does Canon have in store for us?

Absolute position in the mirrorless market

While objective stats are hard to come by (and there is more than one way to measure them), Canon’s grip on the interchangeable-lens camera market is firm, with about half of total sales attributed to it. Oceans rise, empires fall, but it seems Canon manages to remain on top of things. Still, the mirrorless segment poses a challenge, and Canon is opting to reinforce their control over it. According to their recent strategy document, the company will try to broaden their video-oriented audience, both in the social media content creator segment and among “traditional” video professionals. As the company mentions “experience” as one of their top goals and points to some of their more unique designs like the PowerShot V10, we may expect some interesting designs in the future.

Canon professional support

Canon’s 2024 strategy acknowledges the importance of continuous professional support, and Canon is a strong performer in the professional segment. Some indication of that claim emerges from the rental figures regularly published by Lensrentals. Though these figures are far from representing the entire market, they provide some quantitative indication. The professional market may not be as vast as the consumer market, but it holds secondary advantages in terms of brand-based marketing. This will go hand in hand with the company’s continuous professional service and support.

Canon’s take on “experience”

Canon may not be the first manufacturer to offer a 3D-enabled interchangeable lens option. They do, however, offer the RF 5.2mm f/2.8 L Dual Fisheye 3D VR Lens, which is probably the most solid option for an interchangeable 360 VR kit around. This lens is rather niche in terms of mainstream interchangeable systems, but it’s far from being the company’s only entry into the world of viewing experience, and definitely not the most extreme.

Mixed reality

Canon mentioned the viewing experience as one of the major future shifts in the industry, and they have made some strides in this regard. Canon’s MREAL X1 is a mixed-reality headset. Mixed reality is very similar to augmented reality (AR), as it combines input from an internal camera array with virtual input to instill a sense of presence in virtual objects.

Canon MREAL X1 Mixed Reality headset. Image credit: Canon

The MREAL X1 is aimed mostly at industrial applications and currently lacks the finesse of other AR/VR headsets. The view, as seen in the sample video, isn’t as smooth and can’t provide the same experience that recent competitors can – a consequence of its different target audience, which will probably value efficiency over seamlessness. As Canon’s officials recently claimed, current entries like the Apple Vision Pro require more resolution than any camera can provide. The MREAL X1 is a more “down-to-earth” solution.

Volumetric Video

Perhaps the most interesting prospect of Canon’s journey lies in their Volumetric Video: a method of motion capture that uses a large number of synchronized cameras to simultaneously film and 3D-map a scene in real time. The outcome is a video-game-esque 3D environment depicting actual events.

Volumetric Video is extremely demanding in terms of hardware, software, and infrastructure. Such a system requires many cameras, synchronized control, and exorbitant data throughput. Yet the prospect of watching your next soccer, basketball, or football match with the ability to wander around – following your favorite player’s field of view, taking a bird’s-eye view, and then diving deep into the fray – is quite exciting.

Canon Volumetric Video infrastructure scheme. Image credit: Canon
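To get a sense of the scale involved, here is a rough back-of-the-envelope estimate. This is a minimal sketch with purely illustrative numbers – the camera count, resolution, and frame rate below are my assumptions, not Canon’s specifications:

```python
# Back-of-the-envelope data-rate estimate for a volumetric capture rig.
# All numbers are illustrative assumptions, not Canon specifications.

cameras = 100                  # assumed number of synchronized cameras
width, height = 3840, 2160     # assumed per-camera resolution (UHD)
bit_depth = 10                 # bits per color sample
channels = 3                   # RGB
fps = 30                       # assumed capture frame rate

bits_per_frame = width * height * bit_depth * channels
gbps_per_camera = bits_per_frame * fps / 1e9
total_gbps = gbps_per_camera * cameras

print(f"Uncompressed, per camera: {gbps_per_camera:.1f} Gbit/s")
print(f"Uncompressed, whole rig:  {total_gbps:.0f} Gbit/s")
# ~7.5 Gbit/s per camera and ~750 Gbit/s for the rig -- before any 3D
# reconstruction work, which is why the throughput is called "exorbitant" above.
```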

Tradition of innovation

For most current creators, Canon is a “constant”. It was always there – a brand synonymous with image-making. This status was achieved through continuous innovation. Canon was there during (and leading) the autofocus revolution. They brought market-leading cameras into the digital revolution and emerged victorious. The company launched a mirrorless system in June 2012 but was still a bit late to the professional mirrorless turn of events. Once they did enter the market, they quickly harnessed their innovative prowess to churn out various lenses and cameras, now covering most niches and genres. Like it or not, Canon is among the most influential players in this game, and their strategy will probably affect us all in some way.

What do you think the future holds for motion capture and content consumption? Is Canon on the right track here, or is it a “Kodak moment”, when a major manufacturer strays from the needs of its audience? Let us know in the comments.

YouTube to Require AI Labeling by Creators

AI-generated content has been on the rise in recent years, and as it gains popularity and becomes more accessible, concerns rise with it. Following their announcement last November regarding responsible AI, YouTube is now incorporating a new tool into YouTube Studio. This new feature enables creators to disclose the less apparent AI use cases, such as voice alteration, face swaps, or any AI-generated, realistic-looking scenes. As of now, it’s purely voluntary and based on the creator’s good faith.

As AI-generated content becomes more widespread than ever, authenticity concerns grow. YouTube now offers a new tool to mitigate some suspicions regarding content. The new labeling tool is voluntary, providing the creator community with a chance to build much-requested trust with their audience.

AI-altered content tag tool in YouTube Studio. Image credit: YouTube

Authenticity is the key

With this new tool, YouTube aims to combat disinformation spread by the manipulative use of AI tools. You’ll still be able to post an image of yourself riding a dragon or create fantastic landscapes. As long as the end image is deliberately unrealistic, there won’t be any problem posting it; no AI labeling is needed. YouTube specifies the following use cases in which AI labeling is due:

  • Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another’s or synthetically generating a person’s voice to narrate a video.
  • Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different than it does in reality.
  • Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town.
AI-altered content tag as seen in YouTube Shorts. Image credit: YouTube

Full coverage isn’t quite there yet

While we should commend YouTube’s move, there are still some major caveats. YouTube’s specification of AI alterations that won’t require tagging leaves some room for interpretation: “We also won’t require creators to disclose when synthetic media is unrealistic and/or the changes are inconsequential.” Although YouTube goes on to specify some use cases, the term “unrealistic” still seems rather subjective. More than this, it’s the voluntary nature of this tool that may be its undoing.

The voluntary dilemma

Most creators will surely be decent and honest regarding the authenticity of their content – the importance of audience-creator trust can’t be overstated. It’s the small percentage of malevolent users that I’m worried about. This system still provides no solution for them, and, to be honest, I’m not sure there is another way to combat this at this point. The current volume of AI-generated content will make any algorithm-based solution pretty difficult to achieve, and the consequences of automatic tagging may also pose a problem. A solution may lie in some sort of electronic watermarking, like C2PA, but it requires much more than a technological solution, as all social dilemmas do.
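To illustrate the basic idea behind such provenance schemes: C2PA embeds cryptographically signed manifests that travel with the media, so any later modification can be detected. The sketch below is a loose conceptual analogy using only Python’s standard library – it is not the actual C2PA format or API:

```python
# Conceptual sketch of C2PA-style provenance: a signed manifest that travels
# with a piece of content. This is NOT the real C2PA format, just an
# illustration of the idea using Python's standard library.
import hashlib, hmac, json

SIGNING_KEY = b"publisher-private-key"  # stand-in for a real signing key/cert

def make_manifest(content: bytes, ai_altered: bool) -> dict:
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_altered": ai_altered,  # the disclosure YouTube's tool asks for
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(content).hexdigest())

video = b"...video bytes..."
m = make_manifest(video, ai_altered=True)
print(verify(video, m))         # True: manifest intact, content unchanged
print(verify(video + b"x", m))  # False: content was modified after signing
```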

Do you believe such steps may help to combat disinformation and fake news, or are they no more than lip service? Let us know in the comments.

Adobe’s Project Music GenAI Control Previewed – a New Generative AI Tool for Sound

Adobe Project Music GenAI Control is a new generative AI tool for custom music and audio creation and editing. GenAI takes a more custom, selective, and controlled AI approach, very similar to Adobe’s Firefly or the recently announced LTX Studio. Project Music GenAI Control can create music, adjust music fed into it in various ways and forms, lengthen a clip, make variations, change the mood or vibe, and more. The tool – or toolset – has been previewed but is not yet integrated into any working application or software.

There’s no need to re-emphasize the importance of music, audio, and sound to cinematic creation. It can be a complete feature score or toned-down background music shaping the vibe of an interview. Sound will set the pace of a commercial video, or the suspense in either a wildlife documentary or a horror film. In recent years we’ve witnessed various advancements in this field, but Adobe’s recent Project Music GenAI Control has some interesting tricks up its sleeve.

Main features

All features are theoretical at this stage of technical demonstration, but even this early in the process, Project Music GenAI Control can pull off some impressive capabilities. One tool can take an input clip and enhance it in various ways: a simple text prompt will add an “inspiring film” vibe to it, and additional instruments will then accompany the initial feed, broadening the musical impression. If you’re into country music, hip-hop, or R&B, Project Music GenAI Control will happily transform a clip into these genres. The system also gives you control over the intensity of the newly formed tune.

Generated audio

As well as adapting and editing your original input, Project Music GenAI Control can generate complete tunes, loops, themes, and so on. Just type your prompt and you have your royalty-free clip, which you can then edit as much as you want. Project Music GenAI Control can also add a portion of generated music to an existing piece. We’ve all been at that point where the music is a bit shorter than the video it supports – this solution, if implemented right, will solve such issues in no time.

Adobe GenAI prompts. Image credit: Adobe

Real-world applications

While only previewed, it’s pretty easy to imagine the effect of such tools on the film and video industries. Complete movie scores will probably stay out of Project Music GenAI Control’s reach, at least for the near future, but its effect will ripple through the industry. In my eyes, the impact will mostly be on editing efficiency across the board. It will let editors manipulate the pace, feel, and vibe of their creations at the press of a button. We’ll be able to create different variants of every video, fast enough for this to become a mundane testing workflow. Background audio will become as easy as typing a sentence, with no royalties or credits required. We’ll be able to fine-tune a single track for different audiences just by typing a text prompt. Intriguing times indeed.

Adobe GenAI at work. Image credit: Adobe

Ethics and credentials

Unlike some other players in the field of generative AI, Adobe takes extra care regarding the licensing, credentials, and ethics of their products. The company is among the founding members of the Coalition for Content Provenance and Authenticity (C2PA), which unites software giants, key broadcasters, and camera manufacturers to create sustainable authentication protocols that ensure a level of trust in visual information.

Adobe is committed to ensuring our technology is developed in line with our AI ethics principles of accountability, responsibility, and transparency. All content generated with Firefly automatically includes Content Credentials – which are “nutrition labels” for digital content that remain associated with content wherever it is used, published or stored.

Adobe
Video Credit: The Content Authenticity Initiative

Is it the next generation of AI?

AI generators have come a long way in the last couple of years, and things seem to have accelerated recently. One recent shift is towards specific control over the final product. Adobe Firefly is probably the most prominent example of this concept: the ability to transform specific selections in an image proves invaluable for many creators, including yours truly. Lightricks’ recent LTX Studio is another example, and it seems Music GenAI follows that path for audio. Such a shift is extremely influential when it comes to implementing AI-based tools in a professional workflow. It is also a key component in democratizing more and more segments of the creative process – though not without significant pitfalls for other professionals, such as musicians who earn their livelihood from licensed music.

Do you see yourself using such a tool in your day-to-day work? Will it create new opportunities for you? Let us know in the comments.

Is OpenAI’s Sora Trained on YouTube Videos? A Question of Ethics and Licensing

You probably didn’t miss last month’s announcement of OpenAI’s video generator Sora. It created quite a buzz, raising both excitement and concern, as well as a lot of questions within the filmmaking community. One of the pressing matters that always comes up when talking about generative AI is what data developers are using for model training. In a recent interview with The Wall Street Journal, OpenAI’s chief technology officer (CTO) Mira Murati didn’t want (or wasn’t able) to provide an answer to this question. She added that she wasn’t sure whether Sora was trained on YouTube videos or not. This raises an important question: what does this mean in terms of ethics and licensing? Let’s take a critical look together!

In case you did miss it: Sora is OpenAI’s text-to-video generator, allegedly capable of creating consistent, realistic-looking, and detailed video clips of up to 60 seconds, based on simple text descriptions. It hasn’t been released to the public yet, but the published showcases have already sparked a heavy discussion about the possible outcome. One of the assumptions is that it might entirely replace stock footage. Another is that video creators will have a hard time getting camera gigs.

While I’m personally skeptical that AI can completely take over creative and cinematography jobs, there is another question that concerns me a lot more. If they used, say, YouTube videos for model training, how on earth would they be legally allowed to roll out Sora for commercial purposes? What would this mean in terms of licensing?

Was Sora trained on YouTube Videos?

Ahead of the interview, Joanna Stern from The Wall Street Journal provided OpenAI with a bunch of text prompts, which were used to generate video clips. In the discussion with OpenAI’s CTO Mira Murati, they analyzed the results in terms of Sora’s strengths and current limitations. What also became a point of interest for Joanna is how strongly some of the output reminded her of well-known cartoons and films.

Did the model see any clips of “Ferdinand” to know what a bull in a china shop should look like? Was it a fan of “Spongebob”?

Joanna Stern, a quote from The Wall Street Journal interview with Mira Murati

However, when the interview touched on the dataset Sora learns from, Murati suddenly backed off and started beating around the bush. She didn’t want to dive into the details, was “not sure” whether YouTube, Facebook, or Instagram videos were used in Sora’s model training, and leaned on the safe answer that “it was publicly available or licensed data” (which are two very different things to begin with!). You don’t need to be a body language expert to see that OpenAI’s CTO didn’t feel comfortable answering these questions. (You can watch her reaction in the original video interview below, starting from 04:05.)

Copyright challenges concerning generative AI

According to the WSJ, after the interview, Mira Murati confirmed that Sora used content from Shutterstock, which OpenAI has a partnership with. However, that is surely not the only source of footage the developers fed into their deep-learning models.

If we take a closer look at Murati’s response, the copyright and attribution situation becomes even more critical. The wording “publicly available data” may indeed mean that OpenAI’s Sora scrapes the entire Internet, including YouTube publications and content on social media. The licensing terms for YouTube content, for instance, most certainly don’t allow this kind of usage of all the content hosted there.

Maintaining copyright online is a challenging area on its own. I’m not a lawyer, but some things are common sense. For instance, if Searchlight Pictures publishes a trailer for “Poor Things” on YouTube, it doesn’t mean that I’m free to use clips from it in my commercial work (or even in my blog, without correct attribution). At the same time, OpenAI’s Sora would get access to it and be able to use it for learning purposes – and also to profit from it, just like that.

How some companies react

The copyright (and licensing) problem with generative AI is not new. Over the past year, we’ve heard about an increasing number of lawsuits that big media companies like The New York Times and Getty Images have filed against AI developers (particularly often against OpenAI).

If you have ever used text-to-image generators, you’ve surely seen how artificial intelligence adds weird-looking words to the created pictures. More often than not, they distinctly resemble a stock image watermark or a company name, which signifies that these AI companies don’t have the rights to all the datasets they use.

An “abstract background” image suddenly including random text. Image source: generated with Midjourney for CineD

Unfortunately, there are no strict regulations in place yet that would prevent AI developers from using materials found online, and finding out and proving that a particular piece of data was used to train a model is close to impossible. Apart from filing lawsuits, some companies have blocked OpenAI’s web crawler so that it can’t continue taking content from their websites, while others sign licensing agreements (one of the latest examples: Le Monde and Prisa Media, which will bring French and Spanish content to ChatGPT). But what do you do as an individual artist or video creator? This question stays open.

Not revealing datasets is a common issue for generative AI

It’s not just OpenAI’s CTO who doesn’t want to talk about the datasets used for Sora’s learning. The company generally hardly mentions the sources they use. Even in Sora’s technical paper, you can only find a vague note that “training text-to-video generation systems requires a large amount of videos with corresponding text captions.”

The same problematic issue applies to other AI developers, especially the ones that call themselves “small”, “independent”, and/or “research” companies. For example, if you take a look at the website of the famous image generator Midjourney and try to find information on the data they train their models on, you are out of luck. A lack of transparency on this question can be the first sign that these companies are trying to avoid legal problems because they don’t have the rights to the data they are using.

There are exceptions, of course. Adobe, when launching their generative model Firefly, directly addressed the ethical question and published information about the datasets used.

A screenshot from Adobe’s website about the dataset Firefly is trained on. Image source: Adobe Firefly’s webpage

However, their approach is still questionable. Were Adobe Stock contributors notified that their footage would become the training ground for AI? Did they give their consent? Does this fact increase their earnings? I doubt it.

What it means if Sora was trained on YouTube videos

So, as you can see, we have landed in a very messy situation with no clear solutions in sight. During the same interview with The Wall Street Journal, Mira Murati mentioned that Sora will be released to the public later this year. According to her, OpenAI aims to make the tool available at costs similar to their image generator DALL-E 3 (currently around $0.080 per image). However, if they don’t find a way to clarify their training data or compensate filmmakers and video creators, things might get very tense for them. We predict that at least the big studios, production companies, and successful YouTube channels will bury OpenAI in copyright lawsuits if they don’t solve this themselves – which might be hard to do.

And what do you think? How would you react if OpenAI directly confirmed that they used YouTube videos and all published content, regardless of whom it belongs to? Is there any way they can make things right before they roll out Sora?

Feature image source: a screenshot from a video clip generated by OpenAI’s Sora.

Libec Tripods and Camera Support Systems Crafted in Japan – Factory Tour

Libec might be known to many as a tripod and fluid head manufacturer. Yet currently, the company makes a wide range of camera support products, including pedestal systems, telescopic jib arms, remote heads, electronically controlled products, dollies, tracking rails, slider systems, and more. We recently paid a visit to Libec’s new headquarters in Yashio City, Japan, and were truly impressed with what we saw! Join us to learn more about this family-run business as we meet and talk with Koichi-san, the CEO and president of the company.

My team and I are continuing our efforts to reveal the faces behind the companies serving our industry. This time, we headed to Yashio City, not too far from Tokyo, Japan, to visit Libec.

At the new Libec factory. Credit: CineD

The company was founded approximately 55 years ago and recently opened their new facility in Tokyo. The headquarters and factory are located on the same grounds where the company was originally established by Koichi-san’s grandfather many moons ago. Like other companies in our industry, Libec went through various changes and challenges in recent years, especially during the transition between the 2nd and 3rd generations. But now, after opening their new facility, Koichi-san is very focused on taking his company to the next level of Japanese craftsmanship – and will make sure that he is there to celebrate the company’s 100th anniversary (27 more years to go)!

A Libec tripod head cut in half. Can you guess how many parts are inside? Credit: CineD

I hope that you will enjoy this factory tour story. A brand is “just a brand”, but the people behind it are what makes it really special!

If you are interested in this sort of content, you can watch our other factory tours here.

The main camera used for filming during this trip was the FUJIFILM X-S20.

Many thanks to the people at Libec who were kind enough to open their company doors to us.

I hope you enjoyed this factory tour, and don’t forget to let us know in the comment section below what other companies you would like to see featured in the future!

No Camera High-Res Enough for Apple Vision Pro, According to Canon

With 23 million pixels spread across two 4K displays (one per eye), the Apple Vision Pro features an incredibly detailed display. Combine that with the ability to look around a spatial virtual reality environment, and you are facing the exorbitant number of pixels required to feed this insatiable hunger for resolution and detail. Such high demands make it hard to find a camera that can fulfill them – Canon representatives claim that no camera available today can do it.

In a recently published interview, PetaPixel’s Jaron Schneider discussed these challenges with a contingent of Canon executives. They all emphasized the importance of augmented and virtual reality (AR, VR), and they also claimed that no camera today can provide such a demanding video feed. While this claim is probably mathematically true, I’d like to dissect it, because in my eyes things aren’t so simple or decisive.

The Apple Vision Pro displays are only one small component

While the dual displays are impressive in their own right, it’s the spatial nature of the Apple Vision Pro that really raises the bar. Once users move their head, new visual information streams into the twin displays. As long as this information comes from the device’s camera array, there’s no problem – but if it originates from external imagery or video, then the source material must contain an incredible amount of detail.

Apple Vision Pro. Image credit: Apple

Let’s not forget – augmented and virtual reality have more than two dimensions, meaning that getting closer to or farther away from the image should also be considered and accounted for when creating footage for the Apple Vision Pro, as should the refresh rate of the system. These technical specifications pose a great challenge for aspiring creators looking to move into this niche.

What tools do we have today?

Canon itself produces a dedicated AR/VR set consisting of its RF 5.2mm f/2.8 L Dual Fisheye 3D VR Lens and an R5 or R5 C camera. This unique lens projects two image circles onto the 8K sensor. Working with a single sensor bypasses any synchronization issues associated with multi-camera solutions, but compromises raw resolution. This system is probably the most compact, outdoor-ready option capable of adequate results.

Canon RF 5.2mm f/2.8 L Dual Fisheye 3D VR Lens projection (simulation). Image credit: Canon.

Other options are also available. Canon’s executives claim the required benchmark for the Apple Vision Pro revolves around 14K (or about 100 megapixels). While such cameras are hard to find, the Insta360 TITAN comes pretty close: with 8 Micro Four Thirds sensors arranged around the spherical camera body, it captures impressive 11K 360 VR footage.

That’s a lot of pixels, generating a lot of data. The TITAN uses 9 memory cards: eight are assigned to the eight different cameras, while the ninth collects additional sync information, gyroscopic motion stats, etc. Here we encounter another challenge – data throughput.
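A quick sanity check of those numbers – assuming a standard 2:1 equirectangular layout for 360° video, with the layout and frame rate being my own assumptions:

```python
# Sanity-checking the "14K is about 100 megapixels" claim, assuming a
# 2:1 equirectangular layout for 360-degree video.
width = 14_000
height = width // 2                       # 2:1 layout -> 7,000 px tall
print(f"{width * height / 1e6:.0f} MP")   # 98 MP, i.e. roughly 100 MP

# Rough uncompressed data rate at an assumed 30 fps, 10-bit, 3 channels:
gbps = width * height * 10 * 3 * 30 / 1e9
print(f"{gbps:.0f} Gbit/s uncompressed")  # ~88 Gbit/s before any compression
```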

Extreme alternatives

If there’s a will, there’s a way, and everybody wants a piece of this new tech. One rather extreme solution that comes to mind is Sphere’s Big Sky camera in all its 18K glory. As perfect as this solution may sound, we’re talking about one of the rarest cameras in the world, with very demanding operational requirements. Oh, and no depth perception… Other options are camera assemblies. These may provide adequate quality depending on the cameras used, but they are even more operationally demanding and require a high level of expertise, both in terms of videography and of the electro-mechanical rigging required for such contraptions.

Big data

The higher the resolution, the “bigger” the data. Though Apple surely compresses the footage coming to the Vision Pro as efficiently as possible, there’s no way around the raw resolution requirements dictated by such a device. Producing, editing, and sharing this kind of digital information is a daunting task, at least at this point in time (and technology).

AI. Must mention AI.

It seems like almost every tech-related article must mention AI in some way, but it’s actually very relevant here. AI-based tools possess some unique abilities, and I’m not talking about the hyped video-generating tools such as Sora and LTX Studio. A day may come (could be tomorrow…) when these generators are up to the task, but as of today, they would probably generate more problems than solutions. The relevant tools here are resolution enhancers, which have become pretty efficient in recent years and have been around long enough to provide professional-level results with adequate consistency and reliability.

Peak performance is rarely required

Don’t we all just love 8K? Don’t you take photos and videos of your kids (or groceries) at the highest resolution possible? Of course not. Peak performance is very nice, and sometimes required, but most of us do most things at lower settings – and this applies to much more than filmmaking, to be fair. The same will probably go for the Apple Vision Pro. As nice as immersing ourselves in the Grand Canyon while editing or emailing may be, we don’t need every image to cover the entire virtual surroundings. As I see it, one of the Apple Vision Pro’s biggest features is its multitasking ability. So no, I probably won’t be able to fully immerse myself in my recent music video, documentary, or feature, but I believe it’s good enough for now, and beyond. Immersive AR/VR content still has some major non-technical hurdles to leap over before it is fully immersed in the mainstream (pun intended).

Creating spatial content professionally

For those who wish to create specific high-res spatial content, I sadly bear no significant news. Gear-wise, alternatives are scarce, quite expensive, and require a considerable amount of expertise. Alas – early adopters pay a premium, but those who brave these challenges may place themselves in a unique position for the future.

Do you see yourself creating dedicated content for the Apple Vision Pro? Do you think AR/VR will become mainstream content consumption devices in the near future? Let us know in the comments.

Dear Nikon… – A Wishlist of Features We’d Like to See on Future RED Cameras

In a shocking move, Japanese tech giant Nikon recently announced the acquisition of RED Digital Cinema, sparking all sorts of reactions within our filmmaking world. As the dust begins to settle, we took some time to objectively analyze the state of technology on both sides of the Pacific Ocean and draw up a wishlist of features we would like to see implemented in future RED (and Nikon) cameras. By the way, don’t hesitate to let us know your own expectations by answering our latest poll on the topic here.

The unexpected announcement seems to have split the world in half. On one side, some are afraid that a traditional company like Nikon could somewhat alter the rebellious, disruptive nature of RED. On the other, others affirm that this merger can only lead to positive outcomes, technology-wise. To be honest with you, I tend to lean towards the latter group.

Having worked with both RED DSMC3 cameras and the Nikon Z 9, I got to know the pros and cons of each system firsthand. Thus, I feel confident saying that the two brands offer such different tools for such different jobs that the technologies at their disposal have the potential to come together in a perfect alignment of stars.

Following the acquisition, the president of RED, Jarred Land, informally jumped on a spur-of-the-moment conversation with Scott Balkum and Phil Holland (video above). As he puts it in this insightful talk – that you should truly watch in its entirety – this marriage “gives both companies exactly what both were lacking”. But before investigating how Nikon and their now wholly-owned subsidiary RED could create a thriving common ground of technologies, let’s get…

…the infamous compressed RAW patent dispute…

…out of the way first. As you probably know, RED holds a patent on in-camera RAW video compression and has often initiated legal proceedings against competitors, including Nikon, for infringing on their intellectual property. As a result, the Californian brand has long been accused of holding back the entire industry with their patent.

RED and Nikon battled in court over the infamous compressed RAW patent. Image credit: Francesco Andreola / CineD

However, in his first video appearance after the acquisition, Jarred Land affirms that the recent events have “nothing to do with the patent stuff”. He adds that:

“every company that matters, we’ve already licensed to. You haven’t heard about most of them […]. Companies develop and keep things confidential, but all those people that are mad that their company doesn’t have compressed RAW… they actually do. They just haven’t shown, or developed, or released it yet”.

Jarred Land – President of RED Digital Cinema

Of course, this statement represents a complete shift in perspective and hints towards interesting times industry-wise for camera manufacturers.

Dear Nikon, please put these tools into a RED camera

Now, without any further ado, let’s open up some room for the imagination. So, “Dear Nikon…”, here’s my personal wishlist of features – in no particular order – that I’d like to see in the next generation of RED cameras:

1) Z mount with locking ring and electronic communication
Although there’s no official information about the adoption of Nikon’s proprietary lens mount for future RED products, my feeling is that, sooner or later, we’ll need to be prepared for this transition. In the end, it’s in Nikon’s best interests to sell their ever-growing collection of Z mount glass.

Undoubtedly, this transition may be painful for those who already invested in a series of Canon RF mount lenses. However, most people are already adapting PL mount glass on their RED DSMC3 bodies, and the switch to Z mount won’t happen overnight anyway.

Nikon Z lens mount system. Image credit: Nikon

On the other hand, the Z mount system now has an opportunity to finally prove itself to be the most flexible solution on the market. With a flange distance of only 16mm, it allows you to adapt pretty much every lens that was ever produced on this planet. So, for me, it’s hard to look at this probable change to Z mount as a bad thing.

RED’s standard PL to RF adapter with support for lens data. Image credit: RED

Moreover, the adoption of a Z mount with electronic communication would allow RED to replicate what they did with their RF to PL mount adapter for DSMC3 cameras, supporting lens data communication for high-end cinema applications.

2) Nikon’s hybrid autofocus technology
As a natural consequence of a possible shift to Z mount, future RED cameras could borrow better hybrid AF technology from Nikon. RED already dipped their toes into the “autofocus sea” with their DSMC3 cameras, including features such as in-app (original KOMODO) or in-camera (V-RAPTOR / KOMODO-X) face detection AF, which can be very useful for solo shooters. However, developing this tool from the ground up is a time-consuming, resource-intensive task.

The RED V-RAPTOR and KOMODO-X offer in-camera face detection AF. Image credit: RED / CineD

On the other side, Nikon’s hybrid phase-detection/contrast AF technology is a much riper fruit that has all it takes to compete with the likes of Canon’s Dual Pixel or Sony’s Fast Hybrid AF. Compared to RED’s system, Nikon offers much more granular control over AF settings, better tracking options, and most importantly, already delivers incredible results – as you can tell from the test clip below that I shot with the Z 9 and the NIKKOR Z 100-400mm f/4.5-5.6 VR S.

Quick autofocus test shot on Nikon Z 9 with NIKKOR 100-400mm f/4.5-5.6 VR S. Credit: Francesco Andreola / CineD

3) H.265 10-bit encoding
While the patented REDCODE RAW is a fantastic format to work with, and all DSMC3 cameras already offer ProRes as an alternative, RED users could benefit from the addition of H.265 encoding for at least 3 different reasons: a) it would provide more manageable file sizes for projects that don’t require RAW, b) it could help streamline proxy workflows, c) it could speed up Camera-to-Cloud uploads.

The Nikon Z 9 offers a variety of video file formats – including N-RAW, ProRes RAW, ProRes, and H.265. Image credit: Nikon
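As a practical illustration of point b): a 10-bit H.265 proxy can already be generated off-camera with FFmpeg today. Here is a minimal sketch – the file names and CRF value are placeholders, and it assumes an FFmpeg build that includes libx265 with 10-bit support:

```python
# Minimal sketch: generating a 10-bit H.265 proxy from a camera original
# with FFmpeg. Requires an ffmpeg build with libx265; file names and the
# CRF value below are illustrative placeholders.
import subprocess

def make_h265_proxy(src: str, dst: str, crf: int = 26) -> None:
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-c:v", "libx265",          # H.265/HEVC encoder
            "-pix_fmt", "yuv420p10le",  # 10-bit 4:2:0
            "-crf", str(crf),           # quality target (lower = bigger file)
            "-c:a", "aac",              # compressed audio for the proxy
            dst,
        ],
        check=True,
    )

make_h265_proxy("A001_C001.mov", "A001_C001_proxy.mp4")
```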

4) In-body stabilization
Another Z 9 feature that I find extremely useful, especially when working with long lenses, is its 5-axis in-body image stabilization. With the BURANO, Sony proved that it is possible to combine a PL mount, IBIS, and a variable electronic ND filter into the same body.

The Sony BURANO is the world’s first camera to combine a PL mount, IBIS, and a built-in variable ND filter. Image credit: Sony

Coming pretty late to the party, the V-RAPTOR XL was the first RED camera to keep a PL mount and electronic NDs under the same roof. However, IBIS is still a missing piece for RED. And considering how hard an engineering challenge it is to put all the pieces of the puzzle together (it took Sony 5+ years to develop what you see in the BURANO), only a big corporation like Nikon can give RED the boost it needs here.

V-RAPTOR XL is the first RED camera to feature built-in ND filters. Image credit: RED

5) Smart video-centric features
Via regular firmware updates, Nikon equipped their flagship Z 9 camera with a series of video-oriented features that would be interesting to see in a proper cinema camera.

Among my favorites is the “Hi-Res” digital zoom function – added with firmware V3.0. It allows you to exploit the sensor’s 8K resolution to digitally zoom in while capturing 4K or Full HD footage, effectively turning prime lenses into parfocal zooms (see the sketch below). This could be a nice match for ProRes, or possibly H.265, recording on future RED cameras.

The Nikon Z 9 offers 4 different methods to use the digital “Hi-Res” zoom function. Image credit: Nikon
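Conceptually, this kind of zoom is just a moving crop on the oversampled sensor image. Here is a minimal numpy sketch – the frame sizes match 8K/4K UHD, everything else is illustrative and not Nikon’s implementation:

```python
# Conceptual sketch of a "Hi-Res" digital zoom: crop a window from an 8K
# frame and output it at 4K. Up to a 2x zoom the crop is still >= 4K wide,
# so no upscaling is needed -- which is what makes the 8K sensor key here.
import numpy as np

SRC_W, SRC_H = 7680, 4320   # 8K UHD source frame
OUT_W, OUT_H = 3840, 2160   # 4K UHD output

def hires_zoom(frame: np.ndarray, zoom: float, cx: float = 0.5, cy: float = 0.5):
    """Crop-based zoom. zoom=1.0 is the full frame, zoom=2.0 crops to 4K 1:1."""
    crop_w, crop_h = int(SRC_W / zoom), int(SRC_H / zoom)
    x0 = int(cx * SRC_W - crop_w / 2)
    y0 = int(cy * SRC_H - crop_h / 2)
    crop = frame[y0:y0 + crop_h, x0:x0 + crop_w]
    # A real camera resamples the crop to the output size; with zoom <= 2.0
    # the crop is at least 3840x2160, so fine detail is preserved.
    return crop  # resample to (OUT_H, OUT_W) in a real pipeline

frame = np.zeros((SRC_H, SRC_W, 3), dtype=np.uint8)  # dummy 8K frame
print(hires_zoom(frame, zoom=1.5).shape)             # (2880, 5120, 3)
```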

Or the “Auto Capture” function – introduced with firmware V4.0 – that enables you to pre-program the camera to automatically trigger when 3 user-determined criteria are met (Motion / Distance / Subject Detection). Pair this with RED’s existing pre-record function, and you have the ultimate tool for sports and wildlife videographers.

Auto Capture settings on Nikon Z 9. Image credit: Nikon
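In software terms, such a feature boils down to a polling loop that arms a set of predicates and fires the recorder once they all agree. The sketch below is hypothetical – the placeholder predicates only mirror Nikon’s three criteria, they are not Nikon’s actual implementation:

```python
# Hypothetical sketch of an "Auto Capture"-style trigger loop. The three
# placeholder predicates mirror Nikon's criteria (motion / distance /
# subject detection); real versions would use frame differencing, AF
# distance data, and a detection model.
import time

def motion_detected(frame) -> bool: return True   # placeholder
def within_distance(frame) -> bool: return True   # placeholder
def subject_detected(frame) -> bool: return True  # placeholder

def auto_capture(get_frame, start_recording, criteria, poll_hz=30):
    """Poll frames and trigger recording once every armed criterion is met."""
    while True:
        frame = get_frame()
        if all(check(frame) for check in criteria):
            start_recording()  # paired with pre-record, the run-up is kept too
            return
        time.sleep(1 / poll_hz)

auto_capture(lambda: None, lambda: print("recording!"),
             [motion_detected, within_distance, subject_detected])
```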

6) Sensor shield protection
As small an improvement as this may seem, having a shield to protect the image sensor from dust or small debris can be a game-changer. On the Z 9, this function only kicks in when the camera is turned off, but it’s still a nice-to-have feature when performing lens changes out in the wild. Currently, no cinema camera offers a similar solution.

Sensor shield on Nikon Z 9. Image credit: Nikon

7) Improved boot-up times
While this may not be a top priority for RED shooters often working in a studio or on a sound stage, I’m sure it is for people like me who often shoot sports in remote outdoor locations, where you need to save battery life and cannot afford to keep your camera on all the time or bring an extra set of batteries.

While following the 12G SDI protocol has never been a huge issue for me, having to wait 40 seconds for my KOMODO to boot up is just tedious, whereas with other cameras like the Sony FX6, for example, you’re pretty much up and running with the flip of a switch.

What about a Nikkor cine lens set?

One of Nikon’s greatest strengths is their 100+ years of expertise in the development of optics. Many vintage lens enthusiasts are constantly after the best copies of their AI-S lenses. But the Japanese company already boasts some impressive glass in their modern Z-mount lineup, like the NIKKOR Z 58mm f/0.95 S Noct or the 135mm f/1.8 S Plena – that I recently had the opportunity to test.

Nikon plans to introduce 50+ Z mount lenses by 2025. Image credit: Nikon

Although these lenses are designed with a stills-first approach and are not strictly related to my wish list of camera features, a rehoused, cinema-oriented version of Nikon S-Line primes (and zooms) would make for an impressive set of optical tools. Autofocus cine lenses, anyone?

Thoughts on Nikon’s future

To level the playing field a bit, let’s top off our conversation with a few considerations on the impact this acquisition might have on future Nikon products. Personally, I don’t think we’ll ever see a box-style cinema camera from Nikon, at least not in the near future, as this could potentially damage their recently acquired subsidiary. However, there are a few aspects worth considering when thinking about future Z mount hybrid mirrorless cameras.

  • Global Shutter
    While Sony jumped onto the highest step of the podium by launching the a9 III – the first full-frame camera ever to feature a global shutter sensor – Nikon now has the opportunity to exploit RED’s hard-earned skills in this field.

    The a9 III scored pretty decent results in our Lab Test for a global shutter camera. However, after the KOMODO experience, RED truly seems to have mastered this technology, as they can confidently say that their newly launched V-RAPTOR [X] with a global shutter has almost the same dynamic range as the original, rolling-shutter V-RAPTOR (our Lab Test here).
RED V-RAPTOR [X] features the first 8K Vista Vision global shutter sensor in a cinema camera. Image credit: RED
  • What’s the future of N-RAW?
    Although Nikon now has a clear path to further develop N-RAW, I have mixed feelings about the future of this format. On one hand, Nikon is rumored to be working on a better log curve for N-RAW and has also improved their N-Log LUT along the way to provide better results.

    Still, N-RAW is currently only supported in DaVinci Resolve, and quite surprisingly, Principal Product Manager for Adobe Audio & Video – Fergus Hammond – recently announced that they have paused work on adding N-RAW support to their products. This, of course, makes me wonder if Nikon has new plans on the horizon for in-camera RAW video compression.
Adobe pauses work on Nikon N-RAW support. Source: Adobe Community
  • Possible video improvements for future Nikon mirrorless cameras
    With the Nikon Z 9 and Z 8, Nikon proved that they could compete in a league that they previously struggled to get into. However, their cameras are still missing quite a few essential functionalities for video shooters. Shutter angle, anamorphic de-squeeze, and a proper false color exposure tool are just a few examples. Luckily, Nikon’s firmware update game is strong, so we might be able to see some of these features come to the Z 9 and Z 8 in the next few months.

What do you think of Nikon acquiring RED? What features would you like to see in the next generation of RED and Nikon cameras? Don’t hesitate to let us know your thoughts in the comment section down below!
