Software | CineD

PRODUCER Software Evaluated in Real World Testing by MXR Productions

Some weeks ago, we had the opportunity to interview Xaver Walser, CEO of PRODUCER – Maker Machina. PRODUCER is all-in-one production software for managing projects from the initial steps through to delivery. Filmmaker Christoph Tilley and his production company MXR Productions used the app in a real-world scenario to give us their feedback. Well, here goes!

PRODUCER – Maker Machina is a promising all-in-one tool designed mainly for line producers, as well as a collaborative tool for projects of different scales, from short commercials to music videos. This software aims to end the nightmare of having production data scattered across separate apps by offering a comprehensive interface where everyone can be on the same page, saving time and making production more efficient.

Christoph Tilley’s first impressions

In our first video, Xaver showed us how PRODUCER – Maker Machina works, giving a step-by-step explanation of its different features and the things still under development. By changing the production paradigm in an industry where everything else has evolved quickly, this software could become a reference tool for filmmaking teams.

Thanks to features like automating repetitive tasks, connecting the different parts of a shoot in the same program, and making communication easier, PRODUCER – Maker Machina offers a blueprint of the entire production without having to use external apps, send emails, make redundant phone calls, etc. 

Right now, there are more than 5,000 filmmakers testing the app, and filmmaker Christoph Tilley gave us his first impressions in the video above while using the software on a commercial shoot outside the office.

All the stages of the production are organized inside the program. – Source: PRODUCER – Maker Machina.

He likes that the program gives you all the production information in one place, speeding up the process in an industry where speed and efficiency are gold. Running an independent production company, Christoph finds PRODUCER – Maker Machina liberating: it gives him more space to be creative on set and focus on making movies.

The things he would love to see in the future are a sophisticated budgeting tool and a time-tracking tool to avoid switching to external apps like Notion. Finally, Christoph recommends that other filmmakers test the tool and see if it fits their workflow. His team found it a helpful, time-saving tool, which, again, is essential in this industry. Even if it just saves an hour per year, PRODUCER is worth it for him.

Price and availability 

You can sign up for PRODUCER in seconds here. The Free plan has no time limit, giving you the best chance to fully explore the tool and unlock its potential. If you sign up before the 31st of March, you can also take advantage of a limited 50%-off deal available to Public Beta users only. To claim the offer, follow the instructions here.

What do you think about PRODUCER – Maker Machina? Would you be interested in testing the program? Which features would you like to see in the future? Please let us know your thoughts in the comments below!

PRODUCER – Maker Machina Tested – First Look at the All-In-One Production Software

Let’s be honest: production pipelines can be enormously frustrating at times – especially if you shoot commercials, have tons of client projects, and collaborate with different filmmakers. Ever seen a desperate 1st AD calling each member of a 20-person crew to reschedule the shoot? Or found yourself lost in countless Google documents with shooting plans, Notion boards, and Asana task lists? These are exactly the issues that the new production software PRODUCER – Maker Machina aims to solve. Together with seasoned filmmaker Christoph Tilley from MXR Productions, we decided to take it into the field and give the creators (and you) our honest feedback on what this software can and cannot do.

This is the first of our video interview series, in which Nino Leitner (CineD) sits down with Christoph Tilley (MXR Productions) and Xaver Walser (CEO of PRODUCER – Maker Machina). First, Xaver guides everyone through the online software step by step and presents all the features that are currently available. The filmmakers also discuss the biggest pain points in current production workflows, and why it’s so important to change the existing paradigm and streamline all the processes.

What is the main goal of PRODUCER – Maker Machina?

The idea for an all-in-one production software came to Xaver during a commercial shoot for a watch brand. The client asked him if they could have all the created content, shooting schedules, and feedback notes in one place. Regrettably, the filmmaker had to acknowledge that he had never come across such a comprehensive tool. That was the first step toward founding a start-up with precisely this goal: to give creatives an all-in-one application for managing productions from early concept through to delivery. Or, as Xaver Walser nicely puts it: “To make a painkiller for filmmakers.”

Inside the PRODUCER – Maker Machina

According to Xaver, the new software is called “PRODUCER” because the person who will likely use it the most is a line producer. At the same time, it is developed as a collaborative tool adjustable for projects of varying scales, ranging from a 30-second commercial to music videos, image and corporate films. Feature films are planned to be integrated at a later development stage.

When you select a project from a visual board, as demonstrated above, you will see the entire production process divided into stages we all are familiar with:

Image source: PRODUCER – Maker Machina

This overview helps to make sure that no step will be overlooked. Simultaneously, the software keeps everything related to this particular project centralized. In the video interview, Xaver explains in detail how centralization works. Let’s look at a couple of existing features.

Automating repetitive tasks in PRODUCER – Maker Machina

What PRODUCER – Maker Machina promises to be good at is automating processes and simplifying repetitive tasks. For example, here we have a storyboard section in which you can drag and drop your scribbles, generated pictures, or references from the Internet:

Image source: PRODUCER – Maker Machina

Apart from moving them around until you get the correct storyline, you can connect each of your pictures to a shooting day, a location (from the list you created before), and characters. Also, it’s easy to collaborate with a cinematographer, adding additional information such as angle, movement, shot size, camera lens, etc.

Image source: PRODUCER – Maker Machina

While you’re going through this process, the software automatically creates a shot list for each day based on the information you’ve provided. Simply make the final adjustments by dragging shots into the desired sequence (e.g., for consecutive shots), include the estimated time, and let the program do the math. Double-check, connect the actors from the list to the character roles, add your crew members for this project – and, wait, what? Has PRODUCER – Maker Machina just generated a correct call sheet?

At first glance, the tool does seem easy to use, fast, and flexible. During the presentation, Christoph Tilley remembered how they had just finished a shooting schedule in a clumsy Google doc, and watching the new software made him jealous. Well, I can only relate.

For easy communication

Extensive communication is another big pain point in our industry that PRODUCER – Maker Machina wants to resolve. The software allows users to add comments to each document and at every production stage. Your clients can also collaborate whenever needed. For example, in the post-production section, it is possible to upload your first cut and share it for quick feedback. Additionally, you can compare different versions of the edit side by side, directly in the software.

Image source: PRODUCER – Maker Machina

Of course, there are enough tools out there that offer us the same in terms of editing and delivery. For instance, a lot of filmmakers use Frame.io to gather feedback. Yet, how many times have your clients lost the link to a rough cut? If they could have everything just in one online tool, wouldn’t it be easier for everyone?

What else to expect?

Of course, an image is worth a thousand words, and our written text can only capture a limited number of features. So, make sure to watch our video above to form your own impression of PRODUCER.

It’s worth mentioning that this is young software, and what we see now is only a starting point. At the moment, around 4,000 people are testing the application and providing feedback. Xaver Walser says they take all input seriously and have a big roadmap ahead. For example, the developers want to add an extensive, structured briefing document and the option to upload dailies alongside each day’s call sheet – just to give you an idea of the upcoming features.

Price & availability

You can sign up for PRODUCER in seconds here. The Free plan has no time limit, giving you the best chance to fully explore the tool and unlock its potential. If you sign up before the 31st of March, you can also take advantage of a limited 50%-off deal available to Public Beta users only. To claim the offer, follow the instructions here.

Stay tuned for our upcoming videos!

For our video series, Christoph Tilley will take PRODUCER on an upcoming commercial shoot, test it thoroughly, and come back with honest feedback on what worked and what can be improved. So, stay tuned, and don’t miss our follow-up in a couple of weeks!

What do you think of PRODUCER – Maker Machina? How did you feel about the video presentation? Is it something that you were also looking for production-wise? Are there any features that could be added to the software, in your opinion? Let’s talk in the comments below!

Feature image source: PRODUCER Maker Machina

AI Video Generators Tested – Why They Won’t Replace Us Anytime Soon

The rapid development of generative AI will either excite you or make you a bit uneasy. Either way, there is no point in ignoring it, because humanity has already reached the point of no return. The technical advancements are here and will undoubtedly affect our industry, to say the least. As filmmakers and writers, we take it upon ourselves to responsibly inform you about the actual state of the technology and how to approach it ethically and sustainably. With that in mind, we’ve put together an overview of AI video generators to highlight their current capabilities and limitations.

If you’ve been following this topic on our site for a while, you might remember our first piece about Google’s baby steps toward generating moving images from text descriptions. Around a year ago, the company published promising research papers and examples of their first tests. However, Google’s models were not yet available to the general public. Fast forward to now, and not only has this idea become a reality, but we have a plethora of working AI video generators to choose from.

Well, “working” is probably too strong a word. Let’s give them a try, and talk about how and when it is okay to use them.

AI video generators: market leaders

The first company to roll out an AI model capable of generating and digitally stylizing videos based on text commands was Runway. Since spring 2023, they have launched tool after tool for enhancing clips (AI upscaling, artificial slow motion, one-click background removal, etc.), which has made a lot of VFX processes simpler for independent creators. However, we will review only their flagship product – Gen-2, a deep-learning network that can conjure videos upon your request (or at least tries to).

While Runway indeed still runs the show in video generation, they now have a couple of established competitors. The most well-known one is Pika.

Pika is an idea-to-video platform that utilizes AI. There’s a lot of technical stuff involved, but basically, if you can type it, Pika can turn it into a video.

A description from their website

As the creators of Pika emphasize, their tech team developed and trained their own video model from scratch, and you won’t find it elsewhere on the market. However, they don’t disclose what kind of data it was trained on (we will get to this question below). Until recently, Pika worked only through a Discord server as a beta test and was completely free of charge. You can still try it out this way (just click on the Discord link above), or head over to their freshly launched, upgraded model Pika 1.0 in the web interface.

Both companies offer a free basic plan for their products. Runway allows only a limited number of generations to test the platform. In the case of Pika, you get 30 credits (equal to 3 short videos), which refill every day. Also, the generated clips have a baseline length (4 seconds for Runway’s Gen-2, 3 seconds for Pika’s AI) that can be extended a few times. The default resolution also differs: 768 × 448 for Gen-2 versus 1280 × 720 for Pika. However, you can upscale your results either directly in each tool (paid plans required) or with external AI tools like Topaz Labs.

What about open-source projects?

This past autumn, another big name in the image generation space entered the video terrain. Stability AI launched Stable Video Diffusion (SVD) – their first model that can create videos out of still images. Like their other projects, it is open source, so you can download the code on GitHub, run the model locally, and read everything about its technical capabilities in the official research paper. If you want to take a look at it without struggling with AI code, here’s a free online community demo on their HuggingFace space.
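If you’d rather run the model yourself than wait for the online demo, it can be driven from a few lines of Python. The snippet below is a minimal sketch assuming the Hugging Face diffusers implementation of the released image-to-video checkpoint; the model ID, input file name, resolution, and frame rate are illustrative choices, not requirements.

    import torch
    from diffusers import StableVideoDiffusionPipeline
    from diffusers.utils import load_image, export_to_video

    # Load the 25-frame "img2vid-xt" checkpoint in half precision.
    pipe = StableVideoDiffusionPipeline.from_pretrained(
        "stabilityai/stable-video-diffusion-img2vid-xt",
        torch_dtype=torch.float16,
        variant="fp16",
    )
    pipe.to("cuda")  # assumes a CUDA GPU with enough VRAM

    # Any still image can serve as the starting frame; SVD expects roughly 1024 x 576 input.
    image = load_image("input_still.png").resize((1024, 576))

    # Generate the frames and write them out at a chosen frame rate (3-30 fps).
    frames = pipe(image, decode_chunk_size=8).frames[0]
    export_to_video(frames, "generated.mp4", fps=7)

On a consumer GPU, swapping pipe.to("cuda") for pipe.enable_model_cpu_offload() can help fit the model into memory, at the cost of slower generation.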

For now, SVD consists of two image-to-video models, capable of generating videos of 14 and 25 frames, respectively, at customizable frame rates between 3 and 30 frames per second. As the creators claim, external user preference studies showed that Stable Video Diffusion surpasses competing models:

Image source: Stability AI

Well, we’ll see if that evaluation stands the test. At the moment, we can only compare it to the other image-to-video generative tools. Stability AI also plans to roll out a text-to-video model soon, and anyone can sign up for the waitlist here.

Generating videos from text – side-by-side comparison

So, let’s get the experiments going, shall we? Here’s my text prompt: “A woman stands by the window and looks at the evening snow falling outside”. The first result comes from Pika’s free beta model, created directly in their Discord channel:

Not so bad for an early research launch, right? The woman cannot be described as realistic, and for some reason, the snow falls everywhere, but I like the overall atmosphere and the lights outside. Let’s compare it to the newer model of Pika. The same text description with a different video result:

Okay, what happened here? This woman with her creepy plastic face terrifies me, to be honest. Also, where did the initial window go? Now, she just stands outside in the snow, and that’s definitely not the generation I asked for. Somehow, I like the previous result better, although it’s from the already obsolete model. We’ll give it another chance later, but now it’s Gen-2’s turn:

Although Gen-2 also didn’t manage to keep the falling snow solely outside the window, we can see how much more cinematic the output feels here. It’s the overall quality of the image, the light cast on the woman’s hair, the depth-of-field, the focus… Of course, this clip is far from spotless, and you would immediately recognize that it was generated by AI. But the difference is huge, and the models will continue learning for sure.

AI models are learning fast, but they also struggle

After running several tests, I can say that video generators struggle a lot. More often than not they produce sloppy results, especially if you want to get some lifelike motion within the frame. In the previous comparison, we established that Runway’s AI generates videos with higher-quality imagery. Well, maybe they just have a better still image generator because I couldn’t get a video of a running fox out of this bugger, no matter how many times I tried:

Surprisingly, Pika’s new AI model came up with a more decent result. Yes, I know the framing is horrible, and the fox looks as if it ran out of a cheap cartoon, but at least it moves its legs!

By the way, this is a good example to demonstrate how fast AI models learn. Compare the video above (by Pika 1.0) to the one below that I created with the previous Pika model (in Discord). The text input was the same, but the difference in the generated content is drastic:

Animating images with AI video generators

A slightly better application idea for current video generators, in my opinion, is to let them create or animate landscape shots or abstract images. For instance, here is a picture of random-sized golden particles (sparks of light, magic, or dust – it doesn’t matter) on a black background that Midjourney V6 generated:

Image source: generated with Midjourney V6 for CineD

Each of the AI video generators mentioned in the first part of this review allows uploading a still image and animating it. Some don’t require any additional text input and go ahead on their own. For example, here’s what Runway’s Gen-2 came up with:

What do you think? It might function well as a background filler for credits text, but I find the motion lacks diversity. After playing around, I got a much better result with a special feature called “Motion Brush”. This tool, integrated as a beta test into the AI model, allows users to mark a particular area of their still image and define the exact motion.

Pika’s browser model insisted on an additional text description alongside the uploaded image, so the output didn’t come out as expected:

Leaving aside the spontaneous explosions at the end, I don’t like the kind of motion or the camera shake. In my vision, the golden particles should float around consistently. Let’s give it another go and try the community demo of Stable Video Diffusion:

Now we’re talking! Of course, this example has only 6 fps, and the AI model obviously cannot separate the particles from the background, but the overall motion is much closer to what I envisioned. Perhaps, after further training and some more trial and error, SVD will deliver a satisfactory video result.

Consistency issues and other limitations

Well, after looking at these examples, it’s safe to say that AI video generators haven’t yet reached the point where they can take over our jobs as cinematographers or 2D/3D animators. The frame-to-frame consistency is not there, the results often have a lot of weird artifacts, and the motion of the characters (be it human or animal) does not feel even remotely realistic.

Also, at the moment, the general process requires way too much effort to get a decent generated video that’s close to your initial vision. It seems easier to take a camera and get the shot that you want “the ordinary way”.

At the same time, AI is not going to invent its own ideas or carefully work out the framing that best serves the story – nor is that something non-filmmakers will be constantly aware of while generating videos. So, I reckon that applying visual storytelling tools and crafting beautiful, evolving cinematography shall remain in our human hands.

There are also some other limitations that you should be aware of. For example, Stable Video Diffusion doesn’t allow its models to be used for commercial purposes. You face the same restriction with Runway and Pika on their free plans. However, once you get a paid subscription, Pika will remove its watermark and grant commercial rights.

However, I advise against putting generated videos into ads and films for now. Why? Because there is a huge ethical question behind the use of generative AI that needs regulatory and attribution solutions first. Nobody knows what data these models were trained on. Most probably, the training data consists of anything that can be found online – a lot of pictures, photos, and other works by artists who neither gave their permission nor received any attribution. One of the companies trying to handle this issue differently is Adobe with its AI model Firefly. They also announced video AI tools last spring, but those are still in the making.

In what way can we use them to our advantage?

Some people say that AI-generated content will soon replace stock footage. I doubt it, to be honest, but we’ll see. In my opinion, the best way to use generative AI tools is during preproduction, for instance, to quickly communicate your vision. While text-to-image models are a handy go-to for gathering inspiration and creating artistic mood boards, video generators could become a quick solution for previsualization. If you, like me, normally string together your own poor scribbles to create a story reel, then video generators will be a huge upgrade. They don’t produce perfect results, as we’ve seen above, but the results are more than enough to draft your story and previsualize it in moving pictures.

Another idea that comes to mind is animating your still images for YouTube channels or presentations. Nowadays, creators tend to add digital zoom-ins or fake pans to make their photos appear more dynamic. With a little help from AI video generators, they will have more exciting options to choose from.

Conclusion

The creators of the text-to-image AI Midjourney have also announced that they are working on a video generator and plan to launch it in a few months. And there will most certainly be more to come this year. So, we can either look the other way and try to ignore it, or we can embrace this advancement and work together on finding ethical applications. Additionally, it’s crucial to educate people that there will soon be an increase in fake content, and they shouldn’t believe everything they see on the Internet.

What are your thoughts on AI video generators? Any ideas on how to make this technical development a useful tool for filmmaking (instead of only calling AI an enemy that will destroy our industry)? I know that this topic provokes heavy discussions in the comments, so I ask you: please, be kind to each other. Let’s make it a constructive exchange of thoughts instead of a fight! Thank you!

Feature image: screenshots from videos generated by Runway and SVD

Improved Naturalism and Better People Generation in Adobe Firefly Image 2 – A Review of New Features

Time flies, and artificial intelligence models keep getting better. We witnessed it in spring when Midjourney rolled out its updated Version 5, which blew our minds with its unbelievably photorealistic images. As predicted, Adobe didn’t lag behind. Apart from announcing several upcoming AI tools for filmmakers, the company also launched a new model for its text-to-image generator. In Adobe Firefly Image 2, the developers promise better people generation, improved dynamic range, and some new features like negative prompting. After testing it over a period of time, we’re eager to share our results and thoughts with you.

The new deep-learning model Adobe Firefly Image 2 is now in beta and available for testing. In fact, if you tried out its predecessor, you can use the same link; it will open the updated image generator by default. If not, you can sign up here.

To get the hang of how Firefly works, read our detailed review of the previous AI model. In this article, we will skip the basics and concentrate on what’s new and what has changed specifically in Adobe Firefly Image 2.

Photographic quality and generation capabilities

So, the main question is: can the updated Firefly finally create realistic-looking people? As you probably remember, the previous model struggled with photorealism even when you specifically chose “photo” as your preferred content type. For example, this was the closest I got to natural-looking faces last time:

The Adobe Firefly (Beta) results with the prompt “a girl’s face, covered with beautiful flowers”, using the previous AI model. Image source: created with Adobe Firefly for CineD

Why don’t we take the same text prompt and try it out in the latest Firefly version? I must mention here that if you don’t specify whether you want the AI to generate a “photo” or “art” in the parameters, the artificial brain will automatically choose what seems most logical. That’s why the first results with my old prompt were illustrations:

Same prompt, different results. Image source: created with Adobe Firefly Image 2 for CineD

Looks nice and creative, right? However, it’s not what we were going for. So, let’s try again. Here is a grid of four pictures Adobe Firefly Image 2 came up with after I changed the content type to “photo”:

Image source: created with Adobe Firefly Image 2 for CineD

Wow, that’s definitely an improvement! The announcement from developers stated that the new Firefly model “supports better photographic quality with high-frequency details like skin pores and foliage.” These portrait results definitely prove them right.

The flip side of the coin

However, perfection is a myth. While the previous model couldn’t make a picture look like a photo, this one doesn’t seem to have “an imagination”. If you compare the results above, you will see that Adobe Firefly Image 2 put few to no flowers directly on the faces. Apparently, that would have felt too unreal. Yet, that was the main idea behind the image I had visualized, so the old neural network was closer to the point.

If you also want to create something rather dreamy, try playing with your text prompt and settings. For example, I added the word “fantasy” and changed the style to “otherworldly”. Those iterations brought me a slightly better match for my original concept:

Image source: created with Adobe Firefly Image 2 for CineD

What bothers me more, though, is the sudden appearance of a bias problem. Do you notice that almost all these women (even the illustrated ones) have European facial features, green or blue eyes, and long blonde or light-brown hair? Where did the diversity go? Adobe’s first image generator consistently produced all kinds of appearances, races, skin colors, etc. This one, on the contrary, sticks to one category.

Not to mention that different experiments with the new settings and features delivered mixed results, and not all of them worked smoothly. For instance, here is Adobe Firefly Image 2’s attempt to picture children playing with a kite on the beach at sunset:

Image source: created with Adobe Firefly Image 2 for CineD

Photorealistic as hell, I know.

Photo settings in Adobe Firefly Image 2

The new photo settings feature sounded very appealing in the press release, especially for content creators and filmmakers who use image generators for, say, making mood boards. It includes changing the key photo parameters we are all familiar with – aperture, shutter speed, and field of view. The last one refers to the lens, which you can now specify by moving a little slider. It is also the only setting that actually worked in my experiments. Here you see a comparison of results with a 300mm vs. a 50mm camera lens:

At least there’s a subtle change, right? However, I can’t confirm the same for the aperture setting. Even when the description promised “less lens blur”, the results showed a shallow depth of field regardless.

Less lens blur? Don’t think so. Image source: created with Adobe Firefly Image 2 for CineD

So, the idea of manually controlling the camera settings of the image output sounds fantastic, but we’re not there yet. When this feature starts running like clockwork, it will no doubt be reason enough to switch to Adobe Firefly, even from my favorite, Midjourney.

Using image references to match a style

Another new feature Adobe introduced is called “Generative Match”. It allows users to upload a specific reference (or choose one from a preselected list) and transfer its style onto the generated pictures. You will find it in the sidebar along with the other settings:

A screenshot from Adobe Firefly Image 2 interface. Image source: Mascha Deikova/CineD

My idea was to create a fantasy knight in a sci-fi style using the gorgeous lighting and color palette from “Blade Runner 2049”. The first part of this task went quite well:

Image source: created with Adobe Firefly Image 2 for CineD

However, when I tried to upload the Denis Villeneuve film still, Firefly warned me that:

To use this service, you must have the rights to use any third-party images, and your upload history will be stored as thumbnails.

Sounds great, especially because a lot of people forget to attribute the initial artists whose pictures they use for reference. So, I changed my plan and used a film still from my own sci-fi short instead. Below you see my reference and how Firefly processed it, matching the style of the knight image results to its look and feel:

Not bad! Adobe Firefly Image 2 replicated the colors, and you can even see the grain from my original film still. Also, the AI unexpectedly got rid of the helmet to show my knight’s face. So, it tries to match not only the style but also the content of your reference.

Inpainting directly in your results with Adobe Firefly Image 2

Let’s say I like the new colors but don’t want to see the face of the knight from my previous example. Would it be possible to fix it with Adobe’s Generative Fill? Sure, why not, as the AI upgrade allows us to apply the Inpaint function directly to generated images without leaving the browser:

Where to find newly added functions. Image source: Mascha Deikova/CineD

Generative Fill is a convenient tool you can use with a simple brush (just like in Photoshop Beta) to mask out an area of the image you don’t like. Afterward, either insert new elements with a text prompt or click “remove” to let the AI apply a content-aware fill.

In the process of inpainting. Image source: created with Adobe Firefly Image 2 for CineD

To achieve a better result, I marked a slightly bigger area than I needed (in the first attempts, the helmet came out too small in proportion). Several runs later, Firefly generated a couple of decent results, so this experiment was successful:

Image source: created with Adobe Firefly Image 2 for CineD

You can now also alter your own images in the browser without downloading Photoshop (Beta). You can test the Generative Fill magic yourself here. I played with the removal feature and made a very realistic-looking flame visualization for an upcoming SFX shot using a real photo from our location.

Negative prompting

At this point, it only made sense to add negative prompting. As with other image generators, you can now add up to 100 words (English only) that you want Adobe Firefly Image 2 to avoid. To do so, click “Advanced Settings” on the right and type in specific no-go terms, pressing the return key after each word.

The developers recommend using this feature to exclude glitches and unwanted elements like “text”, “extra limbs”, “blurry”, etc. I tried it with another concrete example. To start with, I created an illustrated picture of a cat catching fireflies in the moonlight.

Image source: created with Adobe Firefly Image 2 for CineD

The results are very lovely, but naturally, the artificial intelligence put the image of the moon in each and every picture. My idea, on the other hand, was only to recreate the soft bluish lighting. That’s why I tried to get rid of the Earth’s natural satellite by adding “moon” in the negative prompting field.

And here are the results. Image source: created with Adobe Firefly Image 2 for CineD

Okay, it only worked in one out of four results, and unfortunately, not in the most appealing one. Still, better than nothing. Hopefully, this feature works better with undesired artifacts like extra fingers or gore.

Attribution notice

When you decide to save your results, Adobe Firefly Image 2 warns you that it will apply Content Credentials to let other people know your picture was generated with AI. In case you missed it, Adobe even created their own symbol for such purposes.

I was happy to hear that because, firstly, this symbol is not a big red “not for commercial use” watermark like the one the previous AI model stamped on every picture. Secondly, it is a big step towards distinguishing real content from generated content. Finally, Adobe’s tool even promises to indicate in the credentials when the generated result uses a reference image.

The only problem is: where is it? Scroll through my article again. At this point, you will find at least five images generated by Firefly and downloaded at full size (the feature image, for example). Do you see any content credentials? What about a small “Cr” button? Neither do I. So, why does it get announced whenever you try to save a picture? Is it a bug, or am I just special?

Price and availability

To get full access to all of Adobe’s AI products, you just need any Creative Cloud subscription. The type of subscription determines the number of image generations you can perform. Free users with an Adobe account but no paid software receive 25 credits to test out the AI features, with each credit representing a specific action, such as text-to-image generation. You can read more about the different pricing models here.

Adobe Firefly Image 2 is in web-based beta, but the developers promise to include it in Creative Cloud apps soon.

Have you tried the upgraded model yet? What do you think about it? Which added functions work well and which don’t, in your opinion? Let’s exchange best practices in the comment section below!

Movavi Video Suite 2024 Available – A Closer Look at a Complete Editing Solution

Movavi Video Suite is a simple solution that includes all the tools and assets a content creator could need in today’s social media world. This easy-to-use app is available for Mac and Windows, and it could be an excellent entry-level editor for those who want to start creating videos. So, let’s take a look and see what Movavi Video Suite can do!

In a well-established market where three leading video editors dominate the industry (Premiere Pro, Final Cut Pro X, and DaVinci Resolve), software companies are launching new products adapted to our times and to the latest needs of content creators. The new Movavi Video Suite fits that category.

An alternative for non-editors

Times have changed, and the visual formats we used to know are only a fraction of what’s produced and shared today. With smartphones as the primary creation tools and social media platforms as the main distribution channels, the language of filmmaking has changed. The length, type of content, music, effects, assets used, etc., are only a few of the elements that have to work well for a video to be shared and, therefore, successful.

In this context, anyone can make a video now. You don’t need to be an editor or a filmmaker to film, edit, and publish videos. Content creators, especially those unrelated to the filmmaking industry, need easy tools to do what they intend to do – film with their phone, edit, and upload their creation to share with others. They don’t need complex or expensive gear to publish decent content online. In other words, simpler is better.

This path leads us to the segment of video editing software where Movavi Video Suite fits in perfectly. Programs like DaVinci Resolve or Adobe Premiere can feel overwhelming for beginners or creators who aim for a fast workflow and don’t need all the advanced tools these programs offer.

A simple interface

When opening the program, the first thing we notice is a simple and well-organized interface. Everything is there; we don’t need to open new windows and tabs to figure out how the program works. To avoid confusion, each panel has text like ‘Drag files here’ or ‘Drag folder here’ in the file import section, or ‘Drop files here’ in the timeline. This hints at the program’s intended users.

Movavi Video Suite’s main window. All clear and organized – Source: MOVAVI

The timeline shows all the tools available without navigating the submenus. It’s all straightforward. We can add tracks, select, cut, add a marker, crop a clip, add transitions, etc., by clicking one of the familiar icons next to the timeline.

We can save our videos in the most popular formats in the Export window with a few clicks. However, the program also includes advanced controls to adjust our final video.

An all-in-one solution

We know as editors that one of the most disruptive moments in the creative process is when we have to stop, search somewhere else for music, assets, or stock footage, and then go back to editing. With Movavi Video Suite, this is no longer a problem because it includes libraries with music, sound effects, sample videos, intro videos, animations… everything we need to start and finish the editing process without ever leaving the program.

Tools like ‘Record Video’, ‘Capture Screencast’, or ‘Record Audio’ show Movavi’s commitment to ensuring a seamless creator experience from start to finish.

Movavi includes many effects and presets to polish our videos with a click in a ‘drag and drop’ system. Everything is organized by theme to facilitate our search. We also have essential tools like color adjustments, crop and rotate, pan and zoom, stabilization, chroma, background removal, tracker, scene detection, and speed effects. Moreover, we also have the option to go further and fine-tune things in Manual Mode. AI tools like motion tracking, background removal, or noise removal are available in the latest version.

Inside the app, we can find music, effects, intros, and more – Source: MOVAVI

The included stickers, callouts, and frames also fit nicely in the social media video creator world.

Finally, the Effects Store offers different packs, including effects, music, backgrounds, stickers, etc. We can preview and access them inside the app before purchasing.

Users will find funny effects for their creations – Source: MOVAVI

Who is MOVAVI Video Suite for?

When I opened Movavi Video Suite for the first time, I intended to make it work without reading a manual or going to Google for help. I was able to edit and export a complete video, using different tools and applying effects with no problem at all.

Movavi gives the user a quick workflow with all its tools, assets, and effects visible. Of course, we will not find the same capabilities as those in professional NLEs, but it is a complete system for beginners and users looking for an all-in-one solution to create their videos. In that sense, I see it competing with similar video apps like Splice or Apple’s iMovie.

All the assets can be edited and tweaked simply – Credit: Jose Prada/CineD

Price and availability

A free trial version of the video editing software for Mac can be downloaded here.

The MOVAVI Video Suite can be found here. (Currently 20% off until October 29th)

The full version’s annual subscription costs €67,95.

They also have a 55% discount promotion until October 22 for the Video Suite + Photo Editor pack: €77,95 (annual subscription) or €95,95 (lifetime subscription).

So, what do you think about this alternative to the more established NLEs? Would you give them a chance to create content that needs a quick workflow? Let us know in the comments below!

Midjourney’s Vary Region Feature Challenges Adobe’s Generative Fill – Review

Long-awaited news for all AI art lovers! Midjourney has recently rolled out a new inpainting function, which basically allows users to alter selected parts of their image. It is still in the testing phase, but the results are already quite impressive. Some call the update “an answer to Adobe’s Generative Fill”. Others react with an excited “Finally!” We also tried out Midjourney’s Vary Region feature and think it has the potential to support us in different filmmaking tasks. How so? Let’s explore together!

Midjourney is considered one of the best image generators on the market. As the developers belong to an independent research lab, they manage to release new updates and features at breakneck speed. (Just a couple of weeks ago, we were experimenting with the latest Zoom Out function, for example.) Users also appreciate the precise language understanding and the incredibly photorealistic results of this deep-learning model.

Yet, one of the central things Midjourney lacked was the possibility to change selected areas of your image. Compared to Stable Diffusion, which has had an Inpaint function from the beginning, or Adobe’s Generative Fill, Midjourney users couldn’t adjust the details of their generated visuals. That was frustrating, but finally, this issue won’t be a problem anymore – well, at least to some extent.

Before we dive into the tests, tips, and tricks for Midjourney’s Vary Region Feature, a heads-up. If you have never used this AI image generator before, please read our article “Creating Artistic Mood Boards for Videos Using AI Tools” first. There you will learn the basics of working with Midjourney’s neural network.

Two ways to use Midjourney’s Vary Region feature

Users can access the new feature through Midjourney’s Discord Bot, as usual. After you generate and upscale an image, the button “Vary (Region)” will appear underneath it.

Location of the new button in the Discord interface. Image credit: Mascha Deikova/CineD

When you click on the button, a new window with an editor will pop up directly from your chat. There, you can choose between a rectangular selection tool and a freehand lasso. Use one or both to select the area of your image that you want to refine.

Selecting the area you want to refine. Image credit: Mascha Deikova/CineD

Now you have two possibilities. The first one is to click “submit” and let Midjourney regenerate the defined part of the visual. In this case, it will try to correct mistakes within this area and get a better result according to your original text input. In my example, the neural network created new visualizations of the medieval warrior and matched them to the background.

Regenerating the subject of your image. Image credit: created with Midjourney for CineD

An alternative way to use Midjourney’s new Vary Region feature is the Remix mode. This way, you can completely change the contents of the selected area by writing a new prompt. You might need to enable it first by typing “/settings” and clicking on “remix mode”.

Changing the text prompt to refine your image

Once you’ve enabled the Remix mode, an additional text box will appear in the editor, which will allow you to modify the prompt for the selected region. Describe precisely what you want to see in that area. Be specific about the details you’d like to introduce or exclude (a few tips on wording follow below). Don’t worry, the AI will preserve the original aspect ratio of the root image.

Changing your text prompt in the remix mode. Image credit: Mascha Deikova/CineD

As you can see in the screenshot above, I decided to change the entire environment around my warrior, teleporting him from a foggy forest into an abandoned village. The results were unexpectedly good. Three out of four image variations matched my description precisely and didn’t contain any weird artifacts. Check it out yourself:

Changing the environment around the character. Image credit: created with Midjourney for CineD

Of course, a single successful test does not set a precedent, and my other experiments turned out less encouraging. However, for the tool, which came out only recently and is still in the beta test, the results seem amazing.

What’s especially great about Midjourney’s new Vary Region feature is that it introduces flexibility. By upscaling regenerated images in between, you can improve parts of your image as many times as you need to get the desired result. Let’s say you have a specific shot in mind and you want to convey it to your cinematographer, producer, or production designer. Now, it seems possible to really get it from your head onto paper without any drawing skills. While it may involve some trial and error, the potential is there.

Tips for the best image result

As with other neural networks, Midjourney is still learning, so don’t expect wonders from it straight away. In order to get the best results out of the Vary Region feature, here are some tips you may follow (combining suggestions from the AI developers and my own):

  • This feature works best on large regions of the image. According to creators, if you select 20% to 50% of the picture’s area, you will get the most precise and consistent results.
  • In cases where you decide to alter the prompt, the neural network will provide a better outcome if your new text matches that specific image. For example, Midjourney has no problems adding a hat to a character. Yet, if you ask it to draw an extremely unusual scenario (like an elephant in the room – pardon the pun!), the system might not give you the result you intended.
  • The function also respects some of the commands and parameters that work in Midjourney. So, don’t forget about the power of the “--no” command, a parameter used for negative prompts. It prompts the AI to eliminate the specified elements from the image (see the example after this list).
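To make that last point concrete, here is roughly what a remix prompt with a negative parameter could look like. The scene description is just an illustration; only the --no parameter is standard Midjourney syntax:

    an abandoned medieval village at dusk, muted colors, cinematic lighting --no fog, horses

The same pattern works in a regular /imagine prompt if you want to exclude elements from the start rather than during inpainting.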

Possible ways to use Midjourney’s Vary Region feature in filmmaking

As you probably know, I love using Midjourney for visual research, creating artistic mood boards, and preparing pitch papers for upcoming projects. The latest update will definitely simplify this conceptual task and be useful in situations when I want to communicate my specific vision. As they say, a picture is worth a thousand words.

Apart from that, you might use Midjourney’s Vary Region function to create a fast previz. The system remembers your initial selection when you return to the image after altering specific parts of it, which allows you to use the tool multiple times to generate diverse scenarios. Accordingly, I was able to put my warrior into different settings and then animate them as match cuts for a short preview of his hero’s journey. It didn’t take much time, and the video result speaks for itself:

I’m not suggesting that it will always suit the purpose, but for some scenes or sequences such a previz is enough.

Comparing inpainting in Midjourney to Adobe’s Generative Fill

What Midjourney’s Vary Region definitely lacks at this point in time is the possibility to upload your own image (or film still) and then refine parts of it with the help of artificial intelligence. This would allow us to prepare filmed scenes for VFX, or even quickly mask out unwanted elements in a shot.

Sounds cool, right? This is already within the capabilities of Adobe’s Generative Fill. Based on their own AI called Adobe Firefly, this function is available in Photoshop (Beta). You can install the software and try it out if you have a Creative Cloud subscription. In the following example, I took a film still from my latest short and changed a part of the image, just like that. Now, the protagonist can enjoy a slightly more appealing dinner:

Generative Fill can also work as an eraser for the designated area. If you don’t type anything in the prompt, it will make an effort to eliminate the chosen elements by employing content-aware fill techniques. Midjourney, on the other hand, always tries to put something new into the defined area.

So no, in my opinion, Midjourney is by no means the new Generative Fill. However, it’s developing in this direction, and hopefully, similar functions will be introduced soon. Why “hopefully”? The quality of the pictures created by this artistic image generator is still hard to beat, even with Adobe’s innovations.

Other problems and limitations

We already touched on the issue concerning the selected area’s size. In one of the tests, I tried to replace only the warrior in the wide shot. Changing the prompt accordingly, I hoped to get a beautiful elven woman in a long green dress, and the results were not promising.

Weird results are also part of the process. Image credit: created with Midjourney for CineD

The only decent picture I managed to generate after a couple of tries was the last one, where the woman stands with her back to the viewer. The others seem not only quite disturbing but also weird – and that despite the fact that Midjourney can usually draw absolutely stunning humans and human-like creatures. If we ask it to make variations of the troubled elf from the pictures above, using the same prompt, it instantly comes up with an amazing outcome:

How Midjourney can generate humans. Image credit: created with Midjourney for CineD

So, hopefully, in the next updates, the model will continue to learn and eventually apply its new skills to smaller selected parts of the image as well.

Some other limitations and problems I noticed while playing with Midjourney’s Vary Region feature are:

  • The interface of the built-in editor is bulky and awkward to use. Compared to the variety of flexible tools in Photoshop, Midjourney’s lasso will take some getting used to. Yet, it’s remarkable that developers could embed a functioning editor directly into Discord.
  • Additionally, there is no eraser. As a result, no fast changes can be made to your selection. At the moment, you can only “undo” the steps in which you marked various areas of the image.
  • Midjourney’s Vary Region tool is only compatible with the following model versions: V5.0, V5.1, V5.2, and Niji 5.
  • Midjourney users in the Discord community also noticed that when you do many generations of regional variations, the whole image gradually becomes darker and darker.

Conclusion

In every article on AI tools, I mention the question of ethics, and I won’t stop doing so. Of course, we don’t want artificial intelligence to take over our jobs, or big production companies to use generated art without proper attribution to the original artists whose works the models were trained on. Yet, tools like Midjourney can also become helping hands that support us with mundane tasks, enhance our work, and unleash new ideas for film, art, and music. A careful and ethical approach is key here. Therefore, it’s important to learn how to use neural networks and keep up with the updates.

So, what do you think about Midjourney’s Vary Region feature? Have you already tried it out? What are some other ways of using it in your film and video projects?

Feature image credit: created with Midjourney for CineD

FUJIFILM XApp Review – Finally a Good Camera Companion App?

At the end of May 2023, together with the latest X-S20 mirrorless camera (see our review), FUJIFILM released a brand-new companion smartphone app called XApp. The new app lets you control some camera functions and transfer media, but it also adds features that make FUJIFILM’s mirrorless cameras more useful as day-to-day companions. Let’s dive in!

With a lousy 1.3-star rating on the iOS App Store, the old FUJIFILM Camera Remote app had a very bad reputation. Unreliable connections, an outdated user interface, and a lack of support for modern functions and formats were clear signs that an update was desperately needed.

And FUJIFILM gave us a worthy one with the brand-new XApp. This app is not an update to the old Camera Remote app, but rather a new listing in the App Store that lets us start with a clean slate.

New user interface

The XApp features a minimalistic design with a monochromatic color scheme. The user interface looks very clean and is easy to understand and use.

The new user interface is a big upgrade compared to the old Camera Remote app. Image credit: CineD

The main features are laid out clearly as soon as you start the app. You’ll be prompted to grant a bunch of permissions to access your photo library (required for transferring images from your camera to your phone), location (for example for geotagging), and so on.

The user interface is optimized for smartphones and tablets. Image credit: CineD

The XApp also scales well on larger devices like iPads and other tablets. Culling through numerous photos and selecting them for import is particularly enjoyable on a tablet.

Camera connection

Connecting to a FUJIFILM camera couldn’t be easier. Be sure to update your camera to the latest firmware to make it compatible with the new XApp. Getting the camera ready to connect to a smart device was greatly simplified with the latest firmware updates.

Image transfer

As soon as your phone is connected to your camera, you can select the prominent “Image Acquisition / Photography” button to copy content from your camera to your phone.

You get previews of all the content that is saved on your memory card – though you only see actual thumbnails for photos in the JPEG or HEIF format, since the app doesn’t support RAW photos or video files. All RAW photos and videos show a generic placeholder without a preview and cannot be transferred.

When you want to work with your RAW photos or video files, I would strongly advise using a cable or card reader to transfer the files from the camera to your tablet or computer. Even if it were possible with the XApp, it would take a long time to transfer those large files over WiFi.

You get image previews of your JPEG and HEIF photos, but no RAW photo or video support. Image credit: CineD

To transfer images, the smartphone needs to connect to the camera via WiFi. The app only asks you to join the camera’s WiFi network and the rest is done for you.

a progress indicator tells you how much time is left to transfer all selected images
The progress screen while transferring images. Image credit: CineD

HEIF support

For image sharing, the JPEG and new HEIF images with the famous Film Simulations are great. You get high-resolution previews of the images on your phone or tablet, and you can also pinch-to-zoom on your device for checking details and focus.

After selecting the images you'd like to transfer, you can choose whether to transfer the full-size photos or resized versions. Resizing saves space on your phone, but the HEIF format is worth considering instead: it delivers the same quality as JPEG at lower file sizes, so a full-resolution HEIF ends up roughly the size of a downscaled JPEG. iOS and macOS have supported HEIF photos since 2017, starting with iOS 11 and macOS High Sierra, and you can also work with these files on Windows with the help of extensions.
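If part of your workflow still expects JPEGs, converting the transferred HEIF files on a computer is straightforward. This is only a sketch, not part of FUJIFILM's tooling: it assumes the third-party pillow-heif package is installed, and the folder path is a placeholder.

```python
# Minimal sketch: batch-convert HEIF photos to JPEG using Pillow + pillow-heif.
# Assumes: pip install pillow pillow-heif. Folder path is an assumption.
from pathlib import Path
from PIL import Image
from pillow_heif import register_heif_opener

register_heif_opener()  # teaches Pillow how to decode HEIF/HEIC files

SRC = Path("~/Pictures/fuji_heif").expanduser()   # assumed input folder
OUT = SRC / "jpeg"
OUT.mkdir(exist_ok=True)

for heif_file in SRC.glob("*.HIF"):               # FUJIFILM typically writes .HIF files
    img = Image.open(heif_file)
    img.save(OUT / (heif_file.stem + ".jpg"), "JPEG", quality=90)
    print(f"Converted {heif_file.name}")
```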

Remote control

Strangely, Remote Control and Camera Control are two separate features that are found in different locations in the XApp.

The Remote Control feature is a simple virtual shutter button that takes a picture (with a shutter hold option) or starts/stops a video.

the camera remote link leads to the shutter button screen while the Photography button leads to the camera remote interface
Remote Control and Camera Remote are two separate user interfaces. Image credit: CineD

When you want to control the camera settings and see a live preview, you press the prominent “Image Acquisition / Photography” button and then switch to the Camera tab on the top. Inside the Camera Control interface, you can switch between Photo and Video mode for different sets of settings.

adjustment options for Aperture, Exposure Compensation, ISO, Film Simulation and White Balance in Photo mode
Adjustment options for Aperture, Exposure Compensation, ISO, Film Simulation, and White Balance in Photo mode. Image credit: CineD

In Photo mode, you get a preview image with touch-to-focus functionality, which works accurately but is relatively slow to react to your touch input. Basic status information is visible around the preview image, and you can adjust the aperture, exposure compensation, ISO, film simulation, and white balance.

adjustment options for Shutter Speed, Aperture, Film Simulation and White Balance in Video mode
Adjustment options for Shutter Speed, Aperture, Film Simulation, and White Balance in Video mode. Image credit: CineD

In Video mode, you only get the option to adjust shutter speed, aperture, film simulation, and white balance. Strangely, you cannot adjust ISO in video mode. More settings would be nice to have in a future update.

Camera settings Backup/Restore

One convenient feature of the XApp is the Settings Backup/Restore function. If you use multiple camera bodies or rent your camera, you can simply save your camera settings and restore them before you get going again.

All camera backups are shown and can be selected for restoring to the same model
Backup setting from the connected camera and restore from backups. Image credit: CineD

Unfortunately, you can only restore settings to the same camera model (from X-H2 to X-H2, for example). I understand that different camera models have different feature sets, but I wish I could at least transfer the settings that both cameras share to a different model.

Timeline & activity

Something unique about the XApp are the Timeline and Activity features, which let you see your activity with your FUJIFILM gear.

the timeline displays tiles with images of your cameras, lenses, and photos taken
Timeline view with the camera and lens used, as well as photo occasions. Image credit: CineD

The Timeline shows you a chronological view of all the times you used your cameras and lenses. The app also compiles events with all the images taken on a certain day or in a specific location. You can open a photo event to see more details, such as a map with pins showing where the images were taken.

looking at timeline details
Looking at the timeline details. Image credit: CineD

The Activity feature is a statistical summary of all the metadata in your images. You get to see the total number of images you took, the total video recording time, and how many images you transferred to your phone.

Your activity records are synchronized from your camera to the XApp.
Synchronizing the activity from your camera to the XApp. Image credit: CineD

There is also a breakdown of the cameras and lenses you used and how many pictures were taken with each film simulation. The same goes for the videos stored on your memory card.

breakdown of all cameras, lenses, and more metadata
Breakdown of all the metadata from all the captured images in the Activity tab. Image credit: CineD

You are required to create an account using “Continue with Apple/Google/Facebook” in order to use the Activity feature. Maybe at some point, this will turn into a “social network” of some kind where FUJIFILM users have their own public user profiles where they can choose to share some of this information outside of the XApp.

I don't see any professional use for these features, but they are a very nice touch for personal enjoyment and are clearly geared toward enthusiasts.
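Incidentally, you can build a rough local approximation of that breakdown yourself by reading the EXIF data of your transferred photos. The sketch below is not how the XApp works internally, just a hypothetical illustration using Pillow to tally camera bodies and lenses in a folder of JPEGs (the folder path is an assumption):

```python
# Minimal sketch: tally camera bodies and lenses from JPEG EXIF data,
# loosely mimicking the XApp's Activity breakdown. Folder path is an assumption.
from collections import Counter
from pathlib import Path
from PIL import Image

FOLDER = Path("~/Pictures/fuji_jpegs").expanduser()
bodies, lenses = Counter(), Counter()

for jpg in FOLDER.glob("*.JPG"):
    exif = Image.open(jpg).getexif()
    bodies[exif.get(0x0110, "unknown")] += 1             # 0x0110 = camera Model
    exif_ifd = exif.get_ifd(0x8769)                      # extended Exif IFD
    lenses[exif_ifd.get(0xA434, "unknown")] += 1         # 0xA434 = LensModel

print("Cameras:", dict(bodies))
print("Lenses:", dict(lenses))
```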

Geotagging

Geotagging photos (adding location information to them) with the help of the smartphone app finally works reliably for the first time in any FUJIFILM app. The camera shows a geolocation icon on the screen, which blinks red if no location has been transmitted from the phone recently. Just open the app on your phone, let it connect for a few seconds, and the current location will be captured in the next photo.

the interval for the location data update can be adjusted from 10 sec to 480 sec
Fine adjustment of the location synchronization interval. Image credit: CineD

In my experience, this worked very well, and I only had to open the app a few times to force a location update to the camera. The location data is recorded to both JPEG/HEIF and RAW photos. Just be aware that more frequent location updates will use more of your phone's battery, although I never had to stop using the app because I felt it was draining my (iPhone 13 mini) battery too quickly. You can customize the location synchronization interval in the app settings.
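If you want to double-check that the location actually landed in a transferred photo, the GPS tags can be read straight from the file's EXIF data. Here is a small, hypothetical sketch using a recent version of Pillow; the file name is a placeholder, not a real example file:

```python
# Minimal sketch: read GPS EXIF tags from a transferred JPEG photo
# to confirm that geotagging worked. File name is an assumption.
from PIL import Image
from PIL.ExifTags import GPSTAGS

exif = Image.open("DSCF0001.JPG").getexif()
gps_ifd = exif.get_ifd(0x8825)  # 0x8825 = GPSInfo IFD

if not gps_ifd:
    print("No GPS data found - geotagging may not have synced for this shot.")
else:
    for tag_id, value in gps_ifd.items():
        print(f"{GPSTAGS.get(tag_id, tag_id)}: {value}")
```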

What’s missing

I would really love to see an intervalometer option for timelapses in the XApp. A nice user interface for setting up a timelapse and executing it from your smartphone would be very convenient.

Also, a way to adjust more settings in photo and video mode would be welcome if you really want to rig the camera in a hard-to-reach space and would like to control the whole camera remotely.

Let me know in the comments what features you would like to see added to the XApp!

Conclusion

For users who want extended everyday functionality from their FUJIFILM camera, the XApp is a very welcome introduction and very good at what it does. I really like how close the transferred HEIF files come to native iPhone photos in terms of metadata: thanks to geotagging and in-phone face and animal recognition, the imported mirrorless photos are even included in iCloud's photo memories.

Professional users have to rely on third-party solutions like frame.io and other Camera-to-Cloud providers to wirelessly and safely transfer RAW photos and high-resolution videos.

I hope this app will become even more useful over time if FUJIFILM decides to keep up the Kaizen spirit (continuous improvement of software over time) with the XApp as well.

The FUJIFILM XApp is available for iOS (App Store Link) and Android (Google Play Link).

I tested version 1.0.2 of the XApp on an iPhone 12 mini running iOS 16.5.

More information about the FUJIFILM XApp can be found on the FUJIFILM Website.

Do you use a smartphone app as a companion to your camera? If so, do you use it only for fun or also for professional use? What are your experiences with the FUJIFILM Camera Remote or XApp? Let me know what you think in the comments below! I’d love to hear from you!

Adobe Firefly (Beta) Review – Pros and Cons of Their New Generative AI https://www.cined.com/adobe-firefly-beta-review-pros-and-cons-of-their-new-generative-ai/ https://www.cined.com/adobe-firefly-beta-review-pros-and-cons-of-their-new-generative-ai/#comments Mon, 08 May 2023 14:30:05 +0000 https://www.cined.com/?p=289002 The buzz surrounding artificial intelligence doesn’t seem to fade. On the contrary, following quick start-ups, major media software companies are now also venturing into this emerging field. Perhaps the strongest example of this is Adobe, which announced their own generative AI a couple of months ago. How they plan to integrate new tools into their software is crazy ambitious, to say the least. We joined the beta and tested Adobe Firefly to give you a thorough overview of its current capabilities. Spoiler alert: it doesn’t match the precision of Midjourney yet, but it will definitely reach that point.

At NAB 2023, Adobe also revealed their plans to expand Firefly’s features for quick and easy video production. This will enable users to change the color scheme of a shot by giving AI text commands or to generate a fully sketched storyboard directly from a script. Click “Make previz” and go get some coffee, while the neural network strains its deep-learning brain to create a full 3D visualization for you. What a time to be alive, right?

All jokes aside, we will have to wait and see how Adobe’s plans come to fruition. For now, we can only evaluate what already exists and is available for a test run, such as Firefly’s text-to-image generator, its extra features for creative text effects, and the recoloring of vectors. Let’s dive right in!

What is Adobe Firefly and how to test it?

Firefly is Adobe’s new area of research, which focuses completely on AI-driven tools and generative models. The developers started with image and text effect generation, but they won’t stop there. Their essential goal is to come up with all possible ways to speed up and improve creative workflows, and then integrate them into Adobe’s existing products like Photoshop, Premiere Pro, or InDesign.

Firefly is the natural extension of the technology Adobe has produced over the past 40 years, driven by the belief that people should be empowered to bring their ideas into the world precisely as they imagine them.

Quote from Adobe Firefly’s website

And that's where the community enters the game. Through the beta, Adobe encourages users not only to help develop the existing models but also to suggest helpful new features (more detailed information follows below). To become a Firefly tester, you can simply click "Join the beta" here and wait for the invitation, which may take a couple of weeks.

we tested adobe firefly - some works in the gallery
A screenshot from the Firefly web gallery. Image source: Adobe

Once you have it, you can use Adobe Firefly directly in your browser (including Chrome, Safari, and Edge on the desktop). Bear in mind that the AI beta currently doesn’t support tablets or any mobile devices.

Interface and usability of Adobe Firefly

If you are already familiar with other famous text-to-image generators like Midjourney or Stable Diffusion (we reviewed them here), the interface of Adobe Firefly will blow you away. At least, this was my first impression. Not only does it look very user-friendly, but it also has a number of creative parameters that are very easy to play with. Need another aspect ratio? Press a button. Wanna change your content type from photo to graphic? Press a button. Going for a pastel color palette? You get it, press a button.

we tested adobe firefly - showing interface
Adobe Firefly’s interface in Chrome. Credit: images created with Firefly by CineD

Essentially, when you change those styles, the program only adds words to or removes them from your text prompt below. But compared to Midjourney, where you have to chat with a bot to get a better result, this interface feels like a breakthrough. What's curious is that while you play with different parameters, you can either ask the model to apply them to pictures you've already generated, or click "Refresh" and get a completely different set of images. In the collage below, I combined three slightly varied style tests of the same girl and got different results.

we tested adobe firefly - trying out different styles
Trying out different style preferences. Credit: images created with Firefly by CineD

However, after several tests, I have to admit: the parameters don't always work as promised. It's not magic yet, so changing your composition properties won't instantly give you the same images "shot from above". Neither can Firefly convert your wide-angle picture into macro photography within seconds. One piece of advice I can offer: choose all your style settings before clicking "Generate" – it's the best way to get closer to the result you want.

Another helpful trick you may overlook: hover your mouse over one of the images and a "Show similar" button will pop up in its upper left corner. Click it and you will get different variations of that image. Adobe also regularly shares useful tips in their live streams, like this one, for example.

Adobe Firefly tested: photorealism is not its strength yet

Let’s take another look at the generated face of a girl covered with flowers, which I posted as the first test. What do you notice? First, wow! I like how Firefly offers completely different nationalities and races within one image set. The developers often stress that they are training the AI model to be non-biased, and here we can clearly see their success.

At the same time, I wouldn't mistake these results for actual photos (although I chose the content type "photo" for this image generation). They just don't seem real to me, and I'm not the only one. In the comments on Adobe's live streams, other users also note that Firefly lags behind in photorealism when generating people. Maybe we are just too spoiled by Midjourney, which has developed by leaps and bounds and already carries the title of a "photorealistic wonder" within the AI community. To corroborate my initial impression, I tried the same text prompt in both applications, and, well, see for yourself:

we tested adobe firefly - a side by side comparison with Midjourney
Same prompt, different AI. On the left: image created by Adobe Firefly. On the right: Midjourney’s work. Credit: created for CineD

Here we can clearly see that Adobe Firefly struggles with creating realistic limbs – a problem all deep-learning models face during development. The good news is that the creators know and acknowledge it. They say that while beta testers can't avoid coming across weird artifacts in some scenarios, the neural network will keep learning and gradually become better. Also, when you ask it to generate an interior or a landscape, you already get much smoother results.

we tested adobe firefly - landscape
A lavender field in the sunset. Image credit: created with Firefly by CineD
we tested adobe firefly - generated interior
Trying to get a realistic interior. Image credit: created with Firefly by CineD

Fast and simple text effects

The next feature we tested is a text effect generator. The tool works similarly: just type in a prompt explaining the style, add the text you want to alter, and enjoy. Here we introduce you to a high-tech steampunk variant of the CineD logo:

we tested adobe firefly - complicated text effects
Image credit: created with Firefly by CineD

The software also lets you adjust some parameters, such as changing the text color (doesn't really work), changing the background color (works, but has issues with transparency and text outlines, which the developers promise to improve over time), or deciding how far your effect should stretch. One of the tips found on Firefly's Discord suggests adding [outline-strength=10 (variable 10-100)] to the text prompt to create a bit of extra chaos and push the effect elements further outside their bounds.

After some attempts, I realized you can get much better results if you describe only two or three simple characteristics, like texture+color. Nothing more, nothing less. Such an approach creates minimalistic, yet easy-to-recognize effects.

we tested adobe firefly - other text effects
Different text effects in more minimalistic styles. Image credit: created with Firefly by CineD

Recoloring vectors in Adobe Firefly

This is a fresh tool, which wasn’t available even a week ago. To be fair, I don’t really work with vectors, so I’m not sure how difficult it is to recolor them manually. Still, let’s give artificial intelligence a try.

we tested adobe firefly - recoloring vector
Image credit: created with Firefly by CineD

In this example, I took a simple icon of a film slate from my Instagram and asked Firefly to apply a neon color palette to it. The results meet my expectations and can also be downloaded as SVG files. I'm not sure a designer would present one of these to a client as a final version, but that's hopefully not the point of using generative AI.

What data Firefly’s AI is trained on

This is my favorite question, for sure, because it's part of a huge discussion on ethics in the artificial intelligence field. Compared to other developers, Adobe decided to take a different approach, arguably the most legally sound one. They claim to train Firefly's models exclusively on Adobe Stock images, openly licensed footage, and public domain content whose copyright has expired.

This allows them to curate the deep-learning models against harmful or biased content, and also to respect artists' ownership and intellectual property rights. It also means that, in the future, we will be free to use Firefly-generated content in commercial projects.

we tested adobe firefly - fantasy art
Your fantasy is the only limit. Or is it? Image credit: created with Firefly by CineD

Limitations

  • Adobe Firefly doesn’t currently support the upload or export of video content;
  • You cannot train the model on your own footage, as is possible in Stable Diffusion, for example (which is a pro and a con at the same time);
  • While still in beta, you can only use Firefly for non-commercial purposes (and be prepared for a visible watermark on your generated images, like the one you probably noticed in my examples above);
  • At the moment, Adobe’s generative AI only supports English prompts;
  • This release doesn’t allow saving your created works to Creative Cloud (this feature should be enabled in the future);
  • You won’t be able to create humorous images of famous people or brand mock-ups in Firefly, as it only uses photos of public figures that are available for commercial use on the Stock website (excluding editorial content).

Other features coming up this year

So, yes, generating images with Adobe Firefly is still far from perfect, but it's exciting to observe how its AI continues to evolve. The developers are teasing different features that will become available for beta testing this year, among them image extension, smart portraits, and an inpainting function.

we tested adobe firefly - upcoming functions like inpainting
A demonstration of the upcoming inpainting function. Image credit: Adobe

And if you have an exciting idea on how to use this technology in Adobe applications, you can join the Firefly Discord server and talk directly to the engineers (if you’re a member of the Adobe Community forum, you can also do it there). They are open to feedback and encourage the testers to take part in future exploration of AI.

Conclusion

As the purpose of the beta phase is to help Adobe create cool new tools, which will speed up our workflow someday, they ask Firefly users to report all the bugs and evaluate the generated results. You can do it directly in the interface: there are thumbs-up/down buttons on every image and also a Report tool for providing feedback. I will for sure get back to them with my detailed review.

What do you think about this new AI? Have you already tested Adobe Firefly? How did you like it? What new tools do you expect to see in future releases? Let’s talk in the comments below!

Feature image: created with Adobe Firefly by CineD.

Filmstro Web App Demonstration with New Features – Make Your Own Soundtracks https://www.cined.com/filmstro-web-app-demonstration-with-new-features-make-your-own-soundtracks/ https://www.cined.com/filmstro-web-app-demonstration-with-new-features-make-your-own-soundtracks/#respond Wed, 15 Mar 2023 13:51:05 +0000 https://www.cined.com/?p=279301 Filmstro is a music library with a twist – all tracks and music are adjustable and can be customized within a video, rather than editing the video to the music. Now, the Filmstro Web App got a brand new update allowing filmmakers to integrate video directly into the Web App to then choose and customize their favorite royalty-free music tracks online. This new version is already available as a lifetime subscription for only $189. Let’s take a closer look.

Filmstro was originally launched in 2016 and allows filmmakers to create music tracks that are designed to fit their films. If you are not familiar with the service, you can watch a video demonstration by my colleague Nino here and read more about Filmstro here.

Choosing the music

There are different ways of searching for music in the Web App, as all tracks are categorized by mood, video genre, instrumental palette, new or featured music, game genre and so on.

Filmstro
Source: Filmstro

Including your video

When adding your chosen video to the Filmstro Web App, you are not uploading it; the clip is integrated directly in your browser. Make sure to use the MP4 file format for your film, as ProRes and other professional codecs are not supported. Altogether, this is a very easy-to-use and simple workflow.
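If your edit lives in ProRes, a quick transcode to an H.264 MP4 gets it ready for the Web App. The following is just a sketch, not anything Filmstro provides: it assumes FFmpeg is installed on your system, and the file names are placeholders.

```python
# Minimal sketch: transcode a ProRes .mov into an H.264 .mp4 for browser-based tools.
# Assumes FFmpeg is installed and on your PATH; file names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "my_edit_prores.mov",   # source file (e.g. a ProRes .mov)
        "-c:v", "libx264",            # encode video as H.264
        "-crf", "18",                 # high-quality preview encode
        "-pix_fmt", "yuv420p",        # broad browser/player compatibility
        "-c:a", "aac",                # encode audio as AAC
        "my_edit_preview.mp4",        # output file for the Web App
    ],
    check=True,
)
```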

Once your video is integrated, you can add music to your video and start customizing it to your taste. You can also mix the original video soundtrack with the music you are looking for, to get a better idea of how it would complement the edit.

Filmstro
Source: Filmstro

Customizing your music track

When you've found a suitable music track for your video, you can customize it in several different ways. For example, you can adjust not only the length and the in- and out-shape, but also the momentum, the power, and the depth of the track, simply by dragging 3 sliders.

Filmstro
Source: Filmstro

Additionally, you can switch the individual instrument tracks of your music on and off. For example, with a multi-instrument track, you could single out just the strings or the drums. Unfortunately, the Filmstro Web App only provides track numbers instead of clearly labeled tracks, so to find out which instrument is assigned to which track, you have to listen to every single one.

Filmstro
Source: Filmstro

Conclusions

The Filmstro Web App is a very practical tool that gives you a lot of flexibility in modifying existing music tracks for your film. It comes close to the possibilities you have when working with a real composer: changing the arrangement, the momentum, the way and intensity the instruments perform, and the power and size of the ensemble can come in extremely handy when you are looking for the right music for your project.

Of course, as Filmstro is a music library, it is important to have as large a selection of music tracks as possible. It would also be nice to get an NLE plug-in for Filmstro and to be able to work with keyframes.

Price and availability

The new FilmstroPro Web App is now available on the company’s website. But one of the best things about Filmstro is their lifetime subscription! Indeed, with an investment of only $189, you gain access to their services and an ever-growing library.

Have you been using Filmstro already? How do you like the new features? Please let us know your thoughts in the comment section below!

Filmstro WebApp Review – Adaptive Soundtrack Creation Now Browser-Based https://www.cined.com/filmstro-webapp-review-adaptive-soundtrack-creation-now-browser-based/ https://www.cined.com/filmstro-webapp-review-adaptive-soundtrack-creation-now-browser-based/#comments Mon, 28 Mar 2022 12:29:58 +0000 https://www.cined.com/?p=229838 FilmstroPro V4 is a browser-based version of the company’s adaptive soundtrack creation software. The new Filmstro WebApp now allows filmmakers and content creators to pick up and customize their favorite royalty-free music tracks online without the need for a desktop app. Let’s take a closer look.

First launched in 2016, Filmstro brought a breath of fresh air into the world of royalty-free music. The intuitive approach of their desktop app allows filmmakers to create music tracks that are tailor-made to their visuals and not vice versa. It’s literally as simple as dragging 3 sliders that act on the Momentum, Depth, and Power of the track. You can watch a video demonstration by my colleague Nino here.

Several improvements have made Filmstro a better service over the years and now, with the release of FilmstroPro V4, the company is ready to turn its standalone desktop app into a browser-based service.

FilmstroPro V4: now with Filmstro WebApp

FilmstroPro V4 is the fourth generation of the company's adaptive audio technology. According to founder Sebastian Jaeger, the idea of a browser-based solution has been there since the early days of Filmstro; however, the technology was not as far along then as it is now.

Image credit: Filmstro

Now, after an intense nine-month development period, FilmstroPro V4 is here. Or rather, it's in the cloud, accessible anywhere via the Filmstro WebApp. This new version inherits the same intuitive approach as the desktop app, but it doesn't require powerful hardware to work.

Image credit: Filmstro

In fact, all you have to do is select your video within the Filmstro WebApp. Then you can browse through hundreds of available rights-cleared tracks, pick the one that best suits your project, and make your desired changes using the 3 slider controls. Lastly, you can render out the final result and download it as a .wav file. Your project will be safely stored in your Filmstro web account in case you need to edit it again.

Redesigned with new features

Along with the introduction of the Filmstro WebApp, FilmstroPro V4 also brings new features to this already-capable music editing app. One of the most helpful novelties, especially for new users, is the addition of templates.

Image credit: Filmstro

After picking a track, you can choose to make changes fully manually or apply one of the Hero, Action, or Suspense presets. Each of these templates automatically adds keyframes and adjusts the Momentum, Depth, and Power sliders differently to show you how to achieve various results.

Another time-saving tool is the new Audition feature which allows you to preview any change to the 3 slider controls without actually applying it to the track. When the Audition switch is turned on, the timeline is also greyed out to prevent accidental changes to the music.

Image credit: Filmstro

Last but not least, users can now apply changes to entire music blocks, making a whole portion of the track sound quieter or more powerful according to their creative intention. When working on a “block”, the Snap feature can be enabled to turn each continuous slider into a 3-stage control. You can then apply transitions to each block and fine-tune them to create a smoother passage.

Image credit: Filmstro

Price and availability

The new FilmstroPro V4 with WebApp is now available on the company’s website. One of the coolest things about Filmstro is their lifetime subscription. Indeed, with an investment of only $189 you gain access to their service forever. This offer is currently available, but will most likely run out in the next couple of months. As far as we are concerned, it would make sense for Filmstro to turn into a monthly-paid service after that.

Finally, if you already have an existing lifetime subscription, you will receive access to FilmstroPro V4 and the Filmstro WebApp for free, while still retaining the ability to use your desktop app indefinitely.

Have you been using Filmstro already? How do you like its features? What advantages will you gain from this new browser-based version? Let us know your thoughts in the comment section below!
