Industry | CineD

DJI Response to Countering CCP Drones Act Published
Thu, 28 Mar 2024

The feud between DJI and the American government continues as additional actions accumulate, narrowing the Chinese drone manufacturer’s options. In a recent blog post, DJI tries to debunk some of the basic arguments used in the recent Countering CCP Drones Act. This episode is just the latest in an ongoing conflict involving two global superpowers.

The recent Countering CCP Drones Act by Congresswoman Elise Stefanik and Congressman Mike Gallagher may prove disastrous for leading drone maker DJI. It won’t be the first clash between the American government and Chinese companies and will probably not be the last. “DJI drones pose the national security threat of TikTok, but with wings,” Congresswoman Stefanik wrote in a statement on her website. It seems both sides have valid arguments and concerns, as well as other interests in this case.

DJI Air 3 drone. Image credit: CineD

Justified concerns – DJI Countering CCP Drones Act

The United States representatives’ concerns revolve around data collection, privacy issues, and national security. While some may dismiss such arguments as conspiratorial or lacking evidence, they’re not completely far-fetched. The CCP is not famous for its transparency regarding its relationship with Chinese corporations, and the Shenzhen Da-Jiang Innovations Sciences and Technologies Company (aka DJI) is, in fact, a Chinese corporation. US representatives may be deemed paranoid, but being paranoid doesn’t necessarily mean you are not being stalked.

DJI’s drones are much more than flying cameras

As a leading drone manufacturer and an innovative company in general, DJI is exposed to some serious allegations. Affordable, off-the-shelf, semi-autonomous aircraft can be used for more than establishing shots.

As the world’s most prominent drone maker, DJI potentially bears some level of responsibility for deeds done using their products. But then again, it’s quite unfair to blame the company for military adaptations and weaponization of their products by third parties or end users. I don’t have the information or tools to judge here, and neither do most of us, I assume.

DJI Countering CCP Drones Act – Data is power

The Countering CCP Drones Act, however, doesn’t seem to originate from these concerns. As for the aforementioned act, the flying camera functionality (and geographic orientation linked with it) is what raises concern. Millions of these are roaming around the world, and it seems like some American Congress representatives are not convinced that the footage stays in the confines of the drone or wherever the operator downloads and publishes it. In their blog post, DJI counters the arguments. The company denies any involuntary data collection as well as any other allegation. According to this blog post, DJI is not compelled by the Chinese government to assist in espionage, is not a Chinese military company, and does not take part in atrocities around the world. Regarding the last point, DJI points out their inability to track each and every action carried out by their off-the-shelf products, and their intensive development of safety and security systems.

Compact, off-the-shelf, carry everywhere. Image credit: CineD

Innovative, liberal ideology, or lip service?

DJI warns that the Countering CCP Drones Act will hurt competition, innovation, and the drone community. Fair arguments indeed. However, I must point out the company’s use of American core values, invoking democratic and capitalistic terminology to counter an act aimed at protecting those very values and the nation. Conversely, we see Republican representatives making pro-regulation claims. These contradictions emphasize the complexity of a situation that sprawls further and deeper than the professional scope I’m able to cover. From my stance as a professional visual creator, a loss for DJI is a loss for the entire industry, as the company has cemented itself into the field with numerous innovative and even groundbreaking products.

Who do you think is right about this conflict? Are you afraid of the growing power of DJI, or is it excessive governmental power that gives you shivers? Let us know in the comments.

FUJIFILM GFX 100s – $1,600 Discount Makes Medium Format More Accessible Than Ever
Wed, 27 Mar 2024

FUJIFILM has recently launched a $1,600 discount on their successful GFX 100s medium format (44x33mm) camera. This brings the price down to $4,400 for a 100-megapixel medium format camera. For someone who’s been around since the early days of the digital revolution, such a low price on such a magnificent sensor is mindblowing. But if that’s a surprise to you, you weren’t paying attention to what FUJIFILM and others have been doing for the last few years. Let’s dive in!

“Medium Format” and “Accessible” are two terms rarely found in the same sentence. Digital medium format picked up where analog medium format ended. Most initial entries were digital backs: sensors of various sizes and formats were built into cumbersome electronic backs, which were then mounted onto analog medium format cameras. This method enabled a relatively smooth transition between systems, and also hybrid operation, with digital backs working as indicators while film was used for the higher-quality end result. But it had some significant issues.

Modular mayhem

Early digital backs required significant electronic prowess, not always within reach of film camera manufacturers. Various electronic companies entered the market, creating digital backs with varying levels of compatibility. Such were Leaf, Imacon, and Phase One, with the latter still leading the high-end segment of this market today.

Phase One’s evolution from scanning backs to the XF system.

This practice brought about some compatibility challenges. In these early eons, compact storage media couldn’t support massive file sizes. Many digital backs didn’t even bother with memory cards and were limited to either tethering or storage magazines. This meant one set of batteries for the camera, another type for the back, and an additional one for the magazine. Furthermore, many medium format cameras didn’t have any electronic connections to sync to the back and sensor, requiring additional cables.

Digital medium format back workflow (a relatively modern specimen, with battery and CF card included).

These traits, originally intended to enable seamless integration, made medium format photography extremely cumbersome, unwieldy, and expensive. The change started in 2010.

Pentax unifies the format

Most if not all medium format manufacturers moved towards more unified, proprietary systems, but it was Pentax who introduced the world’s first unified-body medium format camera. Utilizing their vast experience with both analog medium format cameras and digital SLRs, the company created the Pentax 645D.

A young anonymous YouTuber (at that time) recognizes Pentax’s achievement with the 645D.

A long-time innovator since the analog medium format days, the company implemented various technologies into the format. Some of those, coming from 35mm cameras, were considered “hobbyist” or “amateur” features. These included smart light metering, advanced autofocus, etc.

The Pentax 645n, an innovative analog medium format camera.

Pentax did just the same with the 645D – the company didn’t refer to it as a “medium format” camera, disregarding the fuss and technical snobbism, thus blurring the line between medium format and 35mm/full frame. Oh, and Pentax also slashed prices, with the first $10K medium format digital camera ever.

Blurring boundaries

A distinct line can be drawn from the Pentax 645D to the FUJIFILM GFX 100s. Both cameras shamelessly take advantage of every available technology and feature, utterly disregarding their “amateur” reputation. Both come from manufacturers deeply invested in APS-C cameras. Both come from relatively small manufacturers with a strong reputation for innovation in some unexpected ways. There have been many evolutionary changes since 2010. Pentax debuted the use of CMOS sensors in medium-format cameras with the 645Z. This brought extreme ISO settings to medium format for the first time, along with high dynamic range and basic video capabilities. The camera was widely adopted (by medium format standards) and even caught the eye of some known cinematographers.

This may just be the first video review of a medium-format camera.

FUJIFILM carried the format into the mirrorless age, later followed by Hasselblad. The GFX 50S and the GFX 50R introduced a new level of “handhold ability”, being just slightly larger than professional DSLRs. The GFX100 was arguably the first medium format camera designed with professional videography in mind.

Though earlier cameras included video capabilities, I’ll argue that it was more of an afterthought than a design choice. The GFX 100s also incorporated a mirrorless phase-detect autofocus system – a first for the format – including scene recognition and subject tracking. But none of those innovations was as important as the core conceptual shift that started with the Pentax 645D and peaked with the FUJIFILM GFX 100s: medium format cameras as a natural evolution of high-end full-frame cameras rather than a separate segment.

FUJIFILM GFX 100s. Image credit: CineD

The final frontier

The FUJIFILM GFX 100s wasn’t the first to offer a sensor of over 100 megapixels, nor was it the first 4K 10-bit capable medium format camera. We’ve even had stabilized 44×33 sensors, phase-detect autofocus, compact unified camera bodies, and more in other medium format cameras. But the GFX 100s was the first on one significant front – price. As of its debut, the camera’s price aligned with high-end full-framers. The Sony A1, announced in the same week as the GFX 100s, was actually $500 more expensive at $6,500. Up until this point, medium-format cameras were somewhat excluded from the mainstream market. Even when priced around $10K, one would have to use the uniqueness trump card to justify the purchase. The same justification applies to other unique gear such as Leica M cameras, analog filmmaking, large format photography, vintage lenses, etc. When prices align, a fundamental segmentation shift occurs, and that’s the story of the landmark FUJIFILM GFX 100s.

The final chapter for the GFX 100s?

Announced in January 2021, it seems a GFX 100s replacement is due soon enough. This, together with the $7,500 GFX 100 II, may be the motivation behind the current $1,600 price drop.

With its new $4,400 price tag, the GFX 100s assumes an extremely competitive position among high-resolution cameras. The likes of the Sony A7R V, Nikon Z 8, Canon EOS R5, or the Leica SL3 may outperform it in terms of agility, burst speed, and autofocus, but all fall short when it comes to its defining feature – still image quality. The 44x33mm 102-megapixel BSI CMOS sensor is still untouched in terms of resolution, offering significantly higher image quality compared to its full-frame peers. Unlike the medium format cameras of old, this comes with just a minimal toll on speed and operability. The FUJIFILM GFX 100s is fast enough for various genres. It boasts incredible high-ISO performance, can track faces and eyes with adequate accuracy for portrait or event photography, and offers decent 4K 10-bit recording. I personally use it for my high-res landscapes, museum-level reproductions, architecture, family, and occasional wedding photography. It is also my choice for YouTube videos.

No longer niche

Like some of its forebears, the GFX 100s is a landmark camera – another step in the long journey medium format has made into the mainstream market. While still rather unique regarding still image quality, this camera is a true representative of the ongoing hybridization and democratization of the format. While the GFX 100 II is not its direct successor, that camera also walks the same path, and I assume the next generation will too. We’ll have to wait and see if those will be as influential as the GFX 100s, which is now available for an amazingly affordable price.

Another step on the same path – FUJIFILM GFX 100 II. Image credit: CineD

Price and availability

The FUJIFILM GFX 100s is now available for no more than $4,399 in the USA, with various similar offers across Europe. We can’t be too sure about the duration of this sale. My guess is that it will continue until stocks deplete, but I have nothing but common sense to base that on.

Will you consider this high-end stills camera for your work or play? Do the basic but good video specs satisfy you? Let us know in the comments.

OpenAI Sora – First Videos Generated by Beta Testers Released
Tue, 26 Mar 2024

More than a month has passed since OpenAI announced their new AI video generator, but the discussions around it won’t calm down. Apart from showcasing what Sora is capable of, the developers also granted beta access to different artists and filmmakers. Yesterday, the company shared the testers’ early impressions and some of their creative works, including the very first AI-generated short film. Ready to see OpenAI’s Sora in action? Then keep reading, but beware that it might ignite mixed feelings (as it does in us).

In the post, OpenAI states that they’ve been “working with visual artists, designers, creative directors, and filmmakers to learn how Sora might aid in their creative process.” Although they admit that their deep-learning model still needs many improvements, the following results offer a glimpse into the future that’s waiting for us around the corner. Is everything really as positive as the beta testers’ published thoughts suggest?

OpenAI’s Sora in action: entire short films

Of all the visual works chosen by OpenAI for their impressions publication, one stands out. It’s the short film “Air Head” by shy kids, a small Toronto-based multimedia production company. The creators decided to tell an original story of a balloon-headed man, using him as a metaphor for their own heads, filled with so many ideas they might pop. Take a look at it, keeping in mind that all the visuals here were made by AI:

Surely, it’s not the most consistent video you’ve seen in your life. Also, as the protagonist lacks a head, we cannot judge Sora’s ability to communicate relatable emotions and keep characters’ faces intact. Yet, in general, the idea works, and the intended story comes through.

As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal. A new era of abstract expressionism.

Walter Woodman from shy kids, the director of “Air Head”

The feedback from artists and digital creators

Filmmakers were not the only ones invited to see Sora in action; a couple of artists and digital creators also took part in the beta tests. One example is Josephine Miller, co-founder and creative director of London-based Oraar Studio, which specializes in the design of 3D visuals, augmented reality, and digital fashion. Her experiment resulted in a futuristic outfit concept:

Josephine found that the main advantage of working with Sora was the ability to make rapid concepts with a high level of quality. Consequently, this tool enabled her to translate her imagination into something visual “with fewer technical constraints”. Multidisciplinary artist August Kamp agrees. Her quoted thoughts on OpenAI’s webpage are very similar, revolving around the freedom of artistic expression that Sora offers.

While it’s sweet to read inspired thoughts and lovely impressions, I can’t help but wonder: was all the feedback from beta testers so positive? Not so sure. A lot of artists are intimidated by AI and have noticed how it has already taken away some of their paid gigs. Remember the huge discussion sparked by the development of image generators like Midjourney? Using tools such as Sora for quick concept visualization is one thing. Selling their results as the final product is a completely different story.

OpenAI’s Sora in action: other applications

What I found interesting though is that some of the posted impressions show how OpenAI’s Sora in action could be helpful for other purposes. For example, sculptor Alexander Reben used the AI tool as a starting point to develop a 3D sculpture:

Multidisciplinary creator Don Allen III (who has worked for DreamWorks Animation, for example) also mentions the possibility of using Sora for prototyping:

For a long time, I’ve been making augmented reality hybrid creatures that I think would be fun combinations in my head. Now I have a much easier way of prototyping the ideas before I fully build out the 3-D characters to place in spatial computers.

Those two examples show that OpenAI’s Sora might be utilized for purposes other than just generating video content. As there are already AI tools on the market that allow the transfer of videos into, say, 3D models (LumaAI is one of them), the combination of those with Sora opens up new prospects for creators. If you want to see all the published work and early impressions of OpenAI’s Sora, head over here.

Mixed reactions from the creative community

Let’s come back to the short film “Air Head” real quick and scroll through the comments on the shy kids Instagram account where it was published. A lot of the reactions show excitement, awe, hype, and supportive emojis. However, if you take a closer look, you will also find enough critical responses and a bunch of important questions. Some of the commenters worry about the future definition of an artist, and whether it will be whoever shared or saw something first. Others point out more pressing matters:

One of the critical comments on the generated video. Image source: shy kids on Instagram

While for some people, seeing OpenAI’s Sora in action means completely new and mindblowing possibilities for independent creators, others disagree, saying it will also lead to the loss of jobs. There is also a third kind of response that reflects mixed feelings. I guess it’s also the one I can relate to the most:

“We are going to be so saturated with art on every medium soon (music is coming), artists like me are starting to wonder how our value is determined and our need to express will be viewed. I’m both heartbroken and in awe, but get no personal pleasure in creating something with AI because I didn’t actually create it, and the world won’t know or care in the end as well.“

From one of the comments on “Air Head”

The ethical question is still in the air

A further critical question that hasn’t been resolved so far is which sources OpenAI trains Sora on. This is generally a grey area in the realm of generative AI. That’s why, when the company’s CTO Mira Murati couldn’t publicly answer whether they used YouTube videos or not, another round of heated discussion was launched. You can read about it here.

So, now that we’ve seen OpenAI’s Sora in action, what do you think about it? Do you agree with the first impressions of artists who tested it? Could you imagine integrating this tool into your projects? If so, how? Let’s discuss it in the comments below, but please, stay kind to each other.

Feature image: a collage from the visual works, generated by Sora. Source: OpenAI.

“Civil War” Feature Film by Alex Garland Shot on the DJI Ronin 4D
Fri, 22 Mar 2024

During the annual SXSW 2024 festival in Austin, Texas, director and screenwriter Alex Garland showcased the upcoming “Civil War” feature film and revealed that it was shot on the DJI Ronin 4D. Curious to learn more about it? Then let’s dive straight in!

The DJI Ronin 4D-6K was released in October 2021 – here’s our full video review in case you missed it – and it took the company an extra two years to finally launch the Ronin 4D-8K with the Zenmuse X9-8K camera. While this one-of-its-kind filmmaking device is impressive and can produce unique results thanks to its 4-axis stabilization, the Ronin 4D never really made it to Hollywood and feature films. Indeed, to this day, except for short films, commercials, music videos, and documentaries, the Ronin 4D has struggled to make it onto the large screen.

During South by Southwest (SXSW) 2024, screenwriter and director Alex Garland presented his new movie, “Civil War,” which stars Kirsten Dunst, Wagner Moura, Cailee Spaeny, Stephen McKinley Henderson, and Nick Offerman. Before diving deeper, let’s watch the official teaser for the movie, which will be released on April 12th, 2024.

Civil War – Shot on the DJI Ronin 4D

You got it; the story behind “Civil War” is easy to summarize: the movie follows a team of journalists who travel across the United States during a rapidly escalating second American Civil War. This speculative fiction movie received positive reviews during its SXSW world premiere.

In an interview with Empire, director Alex Garland revealed that they shot “Civil War” with the DJI Ronin 4D:

It does something incredibly useful. It self-stabilises, to a level that you control — from silky-smooth to vérité shaky-cam. To me, that is revolutionary in the same way that Steadicam was once revolutionary. It’s a beautiful tool.

Alex Garland

Alex Garland mentions that the camera was affordable at around $5,000 – well, $6,799 if we want to be precise – so we can deduce that they shot with the DJI Ronin 4D-6K. Since the movie will be shown in theaters and IMAX, it’s clear that Ronin 4D footage can hold up perfectly on the big screen.

DJI Ronin 4D 6K during our Lab Test. Image Credit: CineD

Why choose the DJI Ronin 4D to shoot Civil War?

Every tool has its pros and cons, but Alex Garland found that the DJI Ronin 4D was the best tool for the job of shooting Civil War:

We knew we needed to shoot quickly, and move the camera quickly, and wanted something truthful in the camera behaviour, that would not over-stylise the war imagery. All of which push you towards handheld. But we didn’t want it to feel too handheld, because the movie needed at times a dreamlike or lyrical quality, which pushes you towards tracks and dollies.

The final part of the filmmaking puzzle — because the small size and self-stabilisation means that the camera behaves weirdly like the human head. It sees “like us.” That gave Rob (Rob Hardy, B.S.C. – editor’s note) and I the ability to capture action, combat, and drama in a way that, when needed, gave an extra quality of being there.

Alex Garland
DJI Ronin 4D Flex tether system. Image credit: DJI

The main selling points of the DJI Ronin 4D for Garland were its flexibility and built-in 4-axis stabilization. Indeed, time is money on set, and the time saved by not installing dollies, accumulated over several weeks of shooting, can be huge. It also means the team was faster in setting up and following the action, which can benefit the acting.

Will this be the beginning of a new trend, with more movies shot on the DJI Ronin 4D? Only the future will tell, but as Garland says, it is “not right for every movie, but uniquely right for some.”

Source: Empire

featured image credit: A24 / DJI (composition by CineD)

Did you already shoot content with the Ronin 4D-6K or 8K? Do you see yourself shooting entire projects with the Ronin 4D? Don’t hesitate to let us know in the comments below!

Poll: Director of Photography, or a Cameraman/Woman – How Would You Describe Yourself?
Thu, 21 Mar 2024

In this week’s poll, we are very interested in finding out how you would describe yourself. Are you a Director of Photography/Cinematographer or a cameraman/woman? Although this looks like a simple question, please take a moment to answer it honestly.

Times are changing, and what used to represent a “clear job description hierarchy” is no more. Director of Photography/Cinematographer used to be a title given to those who worked on set, collaborating closely with a director while “managing” the surrounding creative workforce. We were “innocent enough” to believe that a “Cinematographer” would work on films for cinema (as the name hints), but boy, it looks as if we were wrong. Currently, there are many “cinematographers” out there who have not shot a single frame for cinema. So why is this happening? Are people looking for a shortcut to gain recognition? Alternatively, let’s phrase it this way: can a particular title enhance your self-promotion? One thing is for sure: the title “Cinematographer” sounds much more convincing than “Youtubegrapher”. And let us be clear here – we produce a lot of content for YouTube as well, and know how much effort it takes.

One of the issues here is that everyone (and his mother) can call themselves whatever they like, as there is no “unified certification or standard”. Is this good or bad? Well, as always, it depends on who you are asking.

By the way, the same goes for being titled as a cameraman/woman, as this seems to be an extinct profession (because everyone is a DoP now)…

A cameraman/woman (or a lighting cameraman) used to be a respected profession – one that allowed the power of storytelling by understanding the equipment and, of course, the lights you were working with. In the old days of film, this was even more significant. However, the shift to digital and the capability to “instantly see what you get” in the viewfinder, coupled with the ability to playback and review results, marked the beginning of the democratization of the profession, and the rest is history.

So who are you? Are you calling yourself a Director of Photography/Cinematographer BECAUSE you are working on cinema sets, or are you using this title regardless of what you film and where your project will be shown? Or are you a cameraman/woman who is happy to hold a camera and create beautiful images sans the desire to work in Hollywood or Bollywood? Moreover, are you true to yourself with the title you are using?

This poll uses Javascript. If you cannot see the poll above, please disable “Enhanced/Strict Tracking Protection” in your browser settings.

We would love to hear your thoughts about this topic so please be so kind and share it with us by voting in our poll, or better yet, leave a comment below.

CineD Best-of-Show Award at NAB 2024 – Submissions Now Open for Manufacturers
Wed, 20 Mar 2024

With NAB 2024 just around the corner, our CineD team is gearing up for our largest editorial presence ever. As in previous years at both NAB and IBC, we will once again hand out several CineD Best-of-Show Awards at NAB 2024. For the first time ever, we invite manufacturers to submit their product innovations ahead of the show. This ensures we don’t miss anything amidst the craziness of the event. Additionally, we’ve redesigned the Awards trophy from the ground up. Read on to learn more!

In less than a month, the 2024 NAB Show will kick off in Las Vegas, and manufacturers around the world are already gearing up for the industry’s largest trade show. CineD will be there with an even bigger crew, producing our usual video content. However, this year, our presence will be more extensive than ever before. We’ll be covering more technology news and products not only for YouTube but also directly for social media (Instagram, YouTube Shorts, TikTok).

Submissions of products for consideration at the CineD Best-of-Show Awards at NAB 2024 are open now

In previous years, our Best-of-Show Award winners at NAB and IBC were chosen from the products we covered during the shows. If you missed them, you can check out the announcements for NAB 2023 and IBC 2023.

Beginning this year, we’re changing our approach. We’re now inviting manufacturers to submit their products in the month leading up to NAB. This way, we get a broader overview of relevant innovations and products that deserve our attention.

Seven categories for CineD Best-of-Show Awards at NAB 2024

CineD is accepting submissions to compete for the CineD Best-of-Show Award at NAB 2024 in seven categories:

  • Cameras
  • Camera Support, Control, and Accessories
  • Audio Equipment
  • Lighting Equipment
  • Lenses
  • AI Innovation
  • Streaming, Remote Production & Cloud Workflows

Newly designed Awards Trophy

We’re thrilled to present our new CineD Best-of-Show Awards Trophy, redesigned from the ground up. Each winner in every category will receive this trophy in person at the 2024 NAB Show.

The new CineD Best-of-Show Trophy, to be handed out to winners for the first time ever at NAB 2024. Image credit: CineD

Submission process

To submit your product(s) or technology and be considered for a CineD Best-of-Show Award at NAB 2024, please head over to our entry form at Zealous, which has all the details.

–> ENTER HERE.

For our full Terms & Conditions regarding submissions, please read this. Please note that there’s a small nomination fee for each entry, and no restriction on the number of entries per manufacturer.

If you want to submit a product or technology that hasn’t been announced or released prior to NAB, we are happy to sign an NDA before your submission. Simply send us the NDA via email, and we will return the signed form in due time.

Former CineD Best-of-Show Award Winners

Former CineD Best-of-Show Award winners include ARRI, Sony, Blackmagic Design, LC-Tec, DJI, Zhiyun, frame.io, FUJIFILM and many, many others. Please note that these winners received our former award design. If you want to read more about previous winners and our reasoning why we selected them as winners, head over to our winners announcement article from NAB 2023 and IBC 2023.

Any questions?

In case you have any questions about the process, please get in touch with us and we will get back to you as soon as possible.

Canon’s 2024 Strategy – Interesting Hints and Speculation
Wed, 20 Mar 2024

Canon celebrates their 21st year as the world’s leading manufacturer of interchangeable-lens camera systems. Their 2024 imaging group strategy seems to point to some interesting trends and shifts, including an attempt to establish an absolute position in the mirrorless market. Canon also notes a shift toward the experience of the audio-visual content consumer. The company plans to tackle these challenges, as well as efficiency and profitability concerns, with various methods and practices.

Canon boasts an established reputation, as they are no stranger to innovation and technological progress. The company has maintained their place among the leading patent applicants in the USA for over a decade. Several important innovations gained Canon prominence in the photo-video industry, and the venerable EF mount is fundamental to many of them. The mount, launched in 1987, completely replaced its FD predecessor. By offering fully electronic camera-lens communication, it propelled Canon’s system to the top position, which they’ve maintained ever since.

Canon current EF & RF Lenses. Image credit: Canon

The inclusion of fully electronic communication and a focus motor in every lens made EF lenses relatively easy to fully adapt to other systems. Almost every modern mirrorless mount has an EF adapter, and most include autofocus and other advanced features. However, as long and interesting as Canon’s history may be, this article is about their future. So, what does Canon have in store for us?

Absolute position in the mirrorless market

While objective stats are hard to come by (and there is more than one way to measure them), Canon’s grip on the interchangeable-lens camera market is firm, with about half of total sales attributed to it. Oceans rise, empires fall, but it seems Canon manages to remain on top of things. Still, the mirrorless segment poses a challenge, and Canon is opting to reinforce their control over it. According to their recent strategy document, the company will try to broaden their video-oriented audience, both in the social media content creator segment and among “traditional” video professionals. As the company mentions “experience” as one of their top goals and highlights some of their more unique designs, like the PowerShot V10, we may expect some interesting designs in the future.

Canon professional support

Canon’s 2024 strategy acknowledges the importance of continuous professional support. Canon is a strong performer in the professional segment; some indication of that emerges from the rental figures regularly published by Lensrentals. Though these figures are far from representing the entire market, they provide some quantitative indication. The professional market may not be as vast as the consumer market, but it holds secondary advantages regarding brand-based marketing. This will go hand in hand with the company’s continuous professional service and support.

Canon’s take on “experience”

Canon may not be the first manufacturer to offer a 3D-enabled interchangeable lens option. They do, however, offer the RF 5.2mm f/2.8 L Dual Fisheye 3D VR Lens, which is probably the most solid option for an interchangeable 360 VR kit around. This lens is rather niche in terms of mainstream interchangeable systems, but it’s far from being the company’s only entry into the world of viewing experience, and definitely not the most extreme.

Mixed reality

Canon mentioned the viewing experience as one of the major future shifts in the industry, and they have made some strides in this regard. Canon’s MREAL X1 is a mixed-reality headset. Mixed reality is very similar to augmented reality (AR): it combines input from an internal camera array with virtual elements to instill a sense of presence in virtual objects.

Canon MREAL X1 Mixed Reality headset. Image credit: Canon

The MREAL X1 is aimed mostly at industrial applications and currently lacks the finesse of other AR/VR headsets. The view, as seen in the sample video, isn’t as smooth and can’t provide the same experience that recent competitors can, owing to a different target audience that will probably value efficiency over seamlessness. As Canon officials recently claimed, current entries like the Apple Vision Pro require more resolution than any camera can provide. The MREAL X1 is a more “down to earth” solution.

Volumetric Video

Perhaps the most interesting prospect of Canon’s journey lies in their Volumetric Video. Volumetric Video is a motion-capture method that uses a large number of synchronized cameras to simultaneously film and 3D-map a scene in real time. The outcome is a video-game-esque 3D environment depicting actual events.

Volumetric Video is extremely demanding in terms of hardware, software, and infrastructure. Such a system requires many cameras, synchronized control, and exorbitant data throughput. Yet the prospect of watching your next soccer, basketball, or football match with the ability to wander around, following your favorite player’s field of view, taking a bird’s-eye view, and then diving deep into the fray, is quite exciting.
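To get a sense of what “exorbitant” means here, a rough back-of-envelope estimate can be sketched in a few lines of Python. The camera count, resolution, frame rate, and bit depth below are illustrative assumptions, not figures published by Canon:

```python
# Rough estimate of the raw data throughput a volumetric video rig
# might produce. All figures are illustrative assumptions.

cameras = 100                 # synchronized cameras around the venue
width, height = 3840, 2160    # 4K UHD per camera
fps = 60
bits_per_pixel = 10           # 10-bit uncompressed capture

bits_per_second = cameras * width * height * fps * bits_per_pixel
gigabytes_per_second = bits_per_second / 8 / 1e9
print(f"~{gigabytes_per_second:.0f} GB/s of raw sensor data")  # ~62 GB/s
```

Even allowing for heavy on-camera compression, a rig of this scale pushes an enormous stream through the venue’s network, which is why infrastructure features so prominently in Canon’s scheme.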

Canon Volumetric Video infrastructure scheme. Image credit: Canon

Tradition of innovation

For most current creators, Canon is a “constant”. It was always there – a brand synonymous with image-making. That position was achieved through continuous innovation. Canon was there during (and led) the autofocus revolution. They brought market-leading cameras into the digital revolution and emerged victorious. The company launched a mirrorless system in June 2012 but was still a bit late to the professional mirrorless turn of events. Once they did enter the market, they quickly harnessed their innovative prowess to churn out various lenses and cameras, now covering most niches and genres. Like it or not, Canon is among the most influential players in this game, and their strategy will probably affect us all in some way.

What do you think the future holds for motion capture and content consumption? Is Canon on the right track here, or is it a “Kodak moment”, when a major manufacturer strays from the needs of its audience? Let us know in the comments.

]]>
https://www.cined.com/canons-2024-strategy-interesting-hints-and-speculation/feed/ 2
YouTube to Require AI Labeling by Creators https://www.cined.com/youtube-to-require-ai-labeling-by-creators/ https://www.cined.com/youtube-to-require-ai-labeling-by-creators/#respond Tue, 19 Mar 2024 18:49:33 +0000 https://www.cined.com/?p=331224 AI-generated content has been on the rise in recent years. As it gains popularity and becomes more accessible, concerns rise. Following their announcement last November regarding responsible AI, YouTube now incorporates a new tool into YouTube Studio. This new feature will enable creators to disclose the less apparent AI use cases, such as voice alteration, face swap, or any AI-generated, realistic-looking scenes. As of now, it’s purely voluntary, based on the creator’s good faith.

As AI-generated content is more widespread than ever, authenticity concerns grow. YouTube now offers a new tool to mitigate some suspicions regarding content. The new labeling tool is voluntary, providing the creators’ community with a chance to build much-requested trust with their audience.

AI altered content tag tool on YouTube Studio. Image credit: YouTube

Authenticity is the key

With this new tool, YouTube aims to combat disinformation spread through manipulative use of AI tools. You’ll still be able to post an image of yourself riding a dragon or create fantastic landscapes. As long as the end image is deliberately unrealistic, there won’t be any problem posting it; no AI labeling is needed. YouTube specifies the following use cases in which AI labeling is due:

  • Using the likeness of a realistic person: Digitally altering content to replace the face of one individual with another’s or synthetically generating a person’s voice to narrate a video.
  • Altering footage of real events or places: Such as making it appear as if a real building caught fire, or altering a real cityscape to make it appear different than it does in reality.
  • Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town.
AI altered content tag as seen in YouTube Shorts. Image credit: YouTube

Full coverage isn’t quite there yet

While we should commend YouTube’s move, there are still some major caveats. YouTube’s specification of AI alterations that won’t require tagging leaves room for interpretation: “We also won’t require creators to disclose when synthetic media is unrealistic and/or the changes are inconsequential.” Although YouTube goes on to specify some use cases, the term “unrealistic” still seems rather subjective. More than that, it’s the voluntary nature of this tool that may be its undoing.

The voluntary dilemma

Most creators will surely be decent and honest regarding the authenticity of their content; the importance of audience-creator trust can’t be overstated. It’s the small percentage of malevolent users that I’m worried about. This system provides no solution for them, and, to be honest, I’m not sure there is another way to combat it at this point. The sheer volume of AI-generated content makes any algorithm-based solution difficult to achieve, and the consequences of mistaken automatic tagging may pose a problem of their own. A solution may lie in some sort of electronic watermarking, like C2PA, but it requires much more than a technological fix, as all social dilemmas do.
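The core idea behind C2PA-style watermarking is a signed manifest that travels with the media file and binds it to a cryptographic hash of the content, so later alterations become detectable. The sketch below is a deliberately simplified illustration of that binding only; it is not the real C2PA manifest format or any actual library API:

```python
import hashlib

# Simplified illustration of the C2PA concept: a manifest records which
# tool produced an asset, bound to a hash of the asset's bytes. In real
# C2PA the manifest is also cryptographically signed; that part is
# omitted here for brevity.

def make_manifest(asset_bytes: bytes, tool: str) -> dict:
    """Record the producing tool, bound to the asset's content hash."""
    return {
        "claim_generator": tool,  # e.g. the AI generator that made the clip
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }

def verify(asset_bytes: bytes, manifest: dict) -> bool:
    """Re-hash the asset and compare it to the manifest's binding."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["asset_sha256"]

video = b"...original pixels..."
manifest = make_manifest(video, "SomeAIGenerator/1.0")
print(verify(video, manifest))            # True: content untouched
print(verify(video + b"edit", manifest))  # False: altered after signing
```

The social half of the problem remains: a verifier can only prove a file matches its manifest, not force anyone to attach one in the first place.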

Do you believe such steps may help to combat disinformation and fake news, or are they no more than lip service? Let us know in the comments.

]]>
https://www.cined.com/youtube-to-require-ai-labeling-by-creators/feed/ 0
Adobe’s Project Music GenAI Control Previewed – a New Generative AI Tool for Sound https://www.cined.com/adobe-previews-music-genai-generative-ai-tool-for-audio-and-music/ https://www.cined.com/adobe-previews-music-genai-generative-ai-tool-for-audio-and-music/#respond Tue, 19 Mar 2024 10:49:18 +0000 https://www.cined.com/?p=329502 Adobe Project Music GenAI Control is a new generative AI tool for custom music and audio creation and editing. It takes a more selective and controlled approach to generative AI, very similar to Adobe’s Firefly or the recently announced LTX Studio. Project Music GenAI Control can create music, adjust music fed into it in various ways, lengthen a clip, make variations, change the mood or vibe, and more. The tool, or tool set, is currently a preview and is not yet integrated into any shipping application or software.

There’s no need to re-emphasize the importance of music, audio, and sound in cinematic creation. It can be a complete feature score or toned-down background music shaping the vibe of an interview. Sound sets the pace of a commercial video and builds the suspense in a wildlife documentary or a horror film. In recent years we’ve witnessed various advancements in this field, but Adobe’s recent Project Music GenAI Control has some interesting tricks up its sleeve.

Main features

All features are theoretical at this stage of technical demonstration, but even this early in the process, Project Music GenAI Control shows some impressive capabilities. One tool takes an input clip and enhances it in various ways: a simple text prompt will add an “inspiring film” vibe to it, and additional instruments will accompany the initial feed, broadening the musical impression. If you’re into country, hip-hop, or R&B, Project Music GenAI Control will happily transform a clip into these genres. The system also gives you control over the intensity of the newly formed tune.

Generated audio

As well as adapting and editing your original input, Project Music GenAI Control can generate complete tunes, loops, themes, and so on. Just type your prompt and you have your royalty-free clip, which you can then edit as much as you want. Project Music GenAI Control can also append generated music to an existing piece. We’ve all been at that point where the music runs a bit shorter than the video it supports; this solution, if implemented right, will resolve such issues in no time.

Adobe GenAI prompts. Image credit: Adobe

Real-world applications

While only previewed, it’s pretty easy to imagine the effect of such tools on the film and video industries. Complete movie scores will probably stay out of Project Music GenAI’s reach, at least in the near future, but its effect will ripple through the industry. In my eyes, the impact will mostly be felt in editing efficiency across the board. It will let editors manipulate the pace, feel, and vibe of their creation at the press of a button. We’ll be able to create different variants of every video, fast enough to make it a routine testing workflow. Background audio will become as easy as typing a sentence, with no royalties or credits required, and we’ll be able to tailor a single track to various audiences just by typing a text prompt. Intriguing times indeed.

Adobe GenAI at work. Image credit: Adobe

Ethics and credentials

Unlike some other players in the field of generative AI, Adobe takes extra care regarding the licensing, credentials, and ethics of their products. The company is among the founding members of the Coalition for Content Provenance and Authenticity (C2PA), uniting software giants, key broadcasters, and camera manufacturers to create sustainable authentication protocols ensuring a level of trust in visual information.

Adobe is committed to ensuring our technology is developed in line with our AI ethics principles of accountability, responsibility, and transparency. All content generated with Firefly automatically includes Content Credentials – which are “nutrition labels” for digital content that remain associated with content wherever it is used, published or stored.

Adobe
Video Credit: The Content Authenticity Initiative

Is it the next generation of AI?

AI generators have come a long way in the last couple of years, and things seem to have accelerated recently. One recent shift is toward specific control over the final product. Adobe Firefly is probably the most prominent example of this concept: the ability to transform specific selections in an image proves invaluable for many creators, including yours truly. Lightricks’ recent LTX Studio is another example, and it seems Music GenAI follows the same path for audio. Such a shift is extremely influential for the adoption of AI-based tools in professional workflows. It is also a key component in democratizing more and more segments of the creative process, though not without significant pitfalls for other professionals, such as musicians who earn their livelihood from licensed music.

Do you see yourself using such a tool in your day-to-day work? Will it create new opportunities for you? Let us know in the comments.

]]>
https://www.cined.com/adobe-previews-music-genai-generative-ai-tool-for-audio-and-music/feed/ 0
Is OpenAI’s Sora Trained on YouTube Videos? A Question of Ethics and Licensing https://www.cined.com/is-openais-sora-trained-on-youtube-videos-a-question-of-ethics-and-licensing/ https://www.cined.com/is-openais-sora-trained-on-youtube-videos-a-question-of-ethics-and-licensing/#comments Mon, 18 Mar 2024 13:22:36 +0000 https://www.cined.com/?p=331027 You probably didn’t miss last month’s announcement of OpenAI’s video generator Sora. It created quite a buzz, raising both excitement and concern, as well as a lot of questions within the filmmaking community. One of the pressing matters that always comes up when talking about generative AI is what data developers use for model training. In a recent interview with The Wall Street Journal, OpenAI’s chief technology officer (CTO) Mira Murati didn’t want (or wasn’t able) to answer this question, adding that she wasn’t sure whether Sora was trained on YouTube videos or not. This raises an important question: what does this mean in terms of ethics and licensing? Let’s take a critical look together!

In case you did miss it: Sora is OpenAI’s text-to-video generator, allegedly capable of creating consistent, realistic-looking, and detailed video clips of up to 60 seconds from simple text descriptions. It hasn’t been released to the public yet, but the published showcases have already sparked a heavy discussion about the possible consequences. One assumption is that it might entirely replace stock footage; another is that video creators will have a hard time getting camera gigs.

While I’m personally skeptical that AI can completely take over creative and cinematography jobs, another question concerns me a lot more. If they used, say, YouTube videos for model training, how on earth would they be legally allowed to roll out Sora for commercial purposes? What would this mean in terms of licensing?

Was Sora trained on YouTube Videos?

Ahead of the interview, Joanna Stern from The Wall Street Journal provided OpenAI with a bunch of text prompts that were used to generate video clips. In the discussion with OpenAI’s CTO Mira Murati, they analyzed the results in terms of Sora’s strengths and current limitations. What also caught Joanna’s interest was how strongly some of the output reminded her of well-known cartoons and films.

Did the model see any clips of “Ferdinand” to know what a bull in a china shop should look like? Was it a fan of “SpongeBob”?

Joanna Stern, a quote from The Wall Street Journal interview with Mira Murati

However, when the interview touched on the dataset Sora learns from, Murati suddenly backed off and started beating around the bush. She didn’t want to dive into the details, was “not sure” whether YouTube, Facebook, or Instagram videos were used in Sora’s model training, and leaned on the safe answer that “it was publicly available or licensed data” (which are two very different things to begin with!). You don’t need to be a body language expert to see that OpenAI’s CTO didn’t feel comfortable answering these questions. (You can watch her reaction in the original video interview below, starting at 04:05.)

Copyright challenges concerning generative AI

According to the WSJ, after the interview Mira Murati confirmed that Sora used content from Shutterstock, which OpenAI has a partnership with. However, that is almost certainly not the only source of footage the developers fed into their deep-learning models.

If we take a closer look at Murati’s response, the copyright and attribution situation becomes even more critical. The wording “publicly available data” may indeed mean that OpenAI’s Sora scrapes the entire internet, including YouTube publications and content on social media. YouTube’s licensing terms, for instance, almost certainly don’t allow this kind of use of the content hosted there.

Maintaining copyright online is a challenging area on its own. I’m not a lawyer, but some things are common sense. For instance, if Searchlight Pictures publishes a trailer for “Poor Things” on YouTube, it doesn’t mean I’m free to use clips from it in my commercial work (or even in my blog, without correct attribution). At the same time, OpenAI’s Sora gets access to it and can use it for training purposes, and also profit from it, just like that.

How some companies react

The copyright (and licensing) problem with generative AI is not new. Over the past year, we’ve heard about an increasing number of lawsuits that big media companies like The New York Times and Getty Images have filed against AI developers (particularly often against OpenAI).

If you have ever used text-to-image generators, you’ve surely seen how artificial intelligence adds weird-looking words to the created pictures. More often than not, they distinctly resemble a stock image watermark or a company name, which suggests these AI companies don’t have rights to all the datasets they use.

An “abstract background” image, suddenly including a random text. Image source: generated with Midjourney for CineD

Unfortunately, there are no strict regulations in place yet that would prevent AI developers from using materials found online, and finding out and proving that a particular piece of data was used for training a model is close to impossible. Apart from filing lawsuits, some companies have blocked OpenAI’s web crawler so that it can’t continue taking content from their websites, while others sign licensing agreements (one of the latest examples: Le Monde and Prisa Media, which will bring French and Spanish content to ChatGPT). But what do you do as an individual artist or video creator? That question stays open.
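For site owners, blocking OpenAI’s crawler is done through the standard robots.txt mechanism: OpenAI documents GPTBot as the user agent of its web crawler and states that it respects such rules. A minimal example (the blanket `Disallow: /` scope is illustrative; a site could also exclude only specific paths):

```
# robots.txt at the site root - opt out of OpenAI's documented crawler
User-agent: GPTBot
Disallow: /
```

Of course, this only covers future crawling by compliant bots; it does nothing about content already scraped or about crawlers that ignore robots.txt.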

Not revealing datasets is a common issue for generative AI

It’s not just OpenAI’s CTO who doesn’t want to talk about the datasets used for Sora’s training; the company hardly mentions its sources at all. Even in Sora’s technical paper, you can only find a vague note that “training text-to-video generation systems requires a large amount of videos with corresponding text captions.”

The same issue applies to other AI developers, especially the ones that call themselves “small”, “independent”, and/or “research”. For example, if you take a look at the website of the famous image generator Midjourney and try to find information about the data used to train their models, you are out of luck. A lack of transparency on this question can be the first sign that a company is trying to avoid legal problems because it doesn’t have rights to the data it is using.

There are exceptions, of course. Adobe, for instance, directly addressed the ethical question when launching their generative model Firefly and published information about the datasets it was trained on.

Image source: Adobe Firefly’s webpage

However, their approach is still questionable. Were Adobe Stock contributors notified that their footage would become the training ground for AI? Did they give their consent? Does this increase their earnings? I doubt it.

What it means if Sora was trained on YouTube videos

So, as you can see, we have landed in a very messy situation with no clear solutions in sight. During the same interview with The Wall Street Journal, Mira Murati mentioned that Sora would be released to the public later this year. According to her, OpenAI aims to make the tool available at costs similar to their image generator DALL-E 3 (currently around $0.080 per image). However, if they don’t find a way to clarify their training data or compensate filmmakers and video creators, things might get very tense for them. We predict that at least the big studios, production companies, and successful YouTube channels will bury OpenAI in copyright lawsuits if OpenAI doesn’t solve this on its own, which might be hard to do.

And what do you think? How would you react if OpenAI directly confirmed that they used YouTube videos and all published content, regardless of whom it belongs to? Is there any way they can make things right before they roll out Sora?

Feature image source: a screenshot from the video clip, generated by OpenAI’s Sora.

]]>
https://www.cined.com/is-openais-sora-trained-on-youtube-videos-a-question-of-ethics-and-licensing/feed/ 20