A downloadable tool for Windows

Download Now (Name your own price)

Flowframes is a simple but powerful app that uses AI frameworks to interpolate videos, increasing their framerate with little to no noticeable quality loss.

The latest versions are exclusive to Patreon for a short time, so the itch.io version might not be the latest!

Features:

  • Based on the brand new RIFE neural network for frame interpolation
  • Also supports AMD GPUs via NCNN/Vulkan
  • Easy to use, no installation, single executable
  • Compatible with MP4, GIF, WEBM, MKV and more
  • Output GIFs or videos (MP4/WEBM/etc) or PNG frames
  • Built-In Frame De-Duplication and speed fixing (for drawn animation, etc)
  • Scene Detection to avoid interpolating cuts
  • Bonus Tools: Loop Video or Change Video Speed
  • ...and more!

If you need any help or have questions, contact me on Discord: nmkd#1425


Recommended System Requirements:

  • Modern CPU (e.g. AMD Ryzen 3 1300X)
  • Vulkan-capable Graphics Card (like GTX 1060 or AMD RX 5700) with 6 GB VRAM
  • SSD for best performance
  • Windows 10

Updated: 1 day ago
Status: In development
Category: Tool
Platforms: Windows
Rating: (10)
Author: N00MKRAD
Tags: ai, artificial-intelligence, video

Download

Download Now (Name your own price)

Click download now to get access to the following files:

Flowframes 1.19 (Basic + Python Version)
Flowframes 1.18 (Basic + Python Version) [OLD]

Comments

Hi, I just tried your app. I'm using RIFE CUDA, but when it starts it says CUDA is not available. Can you tell me how to fix this issue? I have an RTX 3070, btw.

You have to install Python and its dependencies to make it work.
The Python that is packed with the app doesn't support the 3000 series.
You could also join his Patreon; as far as I know, the newest versions are packed with a compatible Python.

I would like to ask whether the broken frames at 2D animation scene transitions are fixed in versions 1.20.4 and 1.18.2, and whether there is any improvement in conversion speed?

Hmm. I can't seem to get any version to run. I've tested it on both computers I own (one Nvidia GPU and one AMD), but in both scenarios the GUI freezes completely once I load in any video file. How should I troubleshoot this?

Did you install Python 3.8.6 (not later), CUDA, and Vulkan?

Also this

https://github.com/n00mkrad/flowframes/blob/main/PythonDependencies.md

Thanks! It doesn't freeze anymore. I had the wrong version of Python. However, when I try to run

pip install torch===1.7.0+cu110 torchvision===0.8.1+cu110 -f https://download.pytorch.org/whl/torch_stable.html

It throws this error:

Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch===1.7.0+cu110
ERROR: No matching distribution found for torch===1.7.0+cu110

The 'pip install opencv-python sk-video imageio' command worked fine.

Yeah, that tends to happen
Python is an ugly mess sometimes :/
Try installing the .whl manually
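
For example (illustrative only - the exact wheel filenames depend on your Python version and platform), you can download the matching wheels from https://download.pytorch.org/whl/torch_stable.html and point pip at the files directly:

:: Hypothetical filenames assuming Python 3.8 on 64-bit Windows; use whatever you actually downloaded.
pip install "C:\Downloads\torch-1.7.0+cu110-cp38-cp38-win_amd64.whl"
pip install "C:\Downloads\torchvision-0.8.1+cu110-cp38-cp38-win_amd64.whl"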

oh hi

hey

I'm on 1.18.2

I ran dedup on a frame folder. It said 'difference from frame 00000001.png to 00000002.png: 1,97% - Deleting. Total: 0 kept / 1 deleted', then proceeded to delete the whole folder of over 115,000 frames.

[edit]

One small update: I tried several different settings:

  • changed the threshold to 1% (it was 2%, I think, the default value) - same result;
  • changed the threshold to 4% - (obviously) the same;
  • tried the 'only analyse, don't delete' option - same (it deleted the whole folder yet again);
  • copied a bunch of random frames (absolutely different from one another) to a folder and renamed them to 00000001, 00000002, 00000003, 00000004, etc. (15 frames total) - it said 'kept 15 frames, deleted 0 frames', but deleted the whole folder again.

Don't use the manual deduplication util

Thanks for your work on this amazing program! Using 1.18.2 with RIFE NCNN on a 5700XT (on Linux), I now get around 26 FPS out on a 480p video! For a 1080p video, I get around 4.5 FPS out!

Edit: On Windows 10, I get around 20% more performance.

Hey, why does the result video look the same as the input when I disable the "Delete duplicate frames" option? Thanks

Probably because it has duplicate frames

Thanks

Trying to use Discord for easier communication - "You do not have permission to send messages in this channel"

Flowframes 1.19.1 - unfortunately, the duplicated frames are still there, as before.

Samples?
Also, please use Discord for easier communication, if you don't mind: https://discord.com/invite/eJHD2NSJRe

Samples were already posted at https://github.com/n00mkrad/flowframes/issues/8 four days ago, before v1.19.1.

Those samples include the original source and the results, so you can easily reproduce them yourself.

Flowframes 1.19.1 throws an error after extracting frames:

Enable scene detection as a workaround. If you don't want to actually use scene detection, set the threshold to 1.

Will be fixed in the next update.

Thank you!

Does RIFE need a lot of time to start interpolating? Using 1.18.2 on an RTX 3080, it keeps showing 'Running RIFE (inference_video.py)' and the cmd window shows only 'changing working dir...' and 'Added... to PATH'. No error, but no pics show up in the interp folder.

Embedded Python is not compatible with RTX 3000 cards.

https://github.com/n00mkrad/flowframes/blob/main/PythonDependencies.md

Yo guys I decided to try this out and am getting this error. Any help would be appreciated. 

Reduce tile size

Mandatory MP4 encoding (in order to properly handle scene changes) negates many of the benefits of your application, more than doubling the total processing time just to output a format I didn't request.

Here is the result of my very last test, which looks even more ridiculous:

Selected video/directory: test.mp4
Video FPS: 23.976 - Total Number Of Frames: 10481
Input Resolution: 1920x1080
Extracting scene changes...
Detected 115 scene changes.
Extracting video frames from input video...
frame=10481 fps= 42 L time=00:07:15.22 speed=1.75x Generating timecodes... Done.
Using embedded Python runtime.
Running RIFE (inference_video.py)...
Interpolated 41920/41924 frames (100%) - Average Speed: 1.25 FPS In / 4.99 FPS Out - Time: 02:20:03 - ETA: 0ms
Done running RIFE - Interpolation took 02:20:03

So, the interpolation by itself took 02:20:03

Now, the funny part:

In order to get proper scene-cut frame insertion and the best possible video output quality, I HAVE to do MP4 encoding with CRF 0 (which I don't really need), because "some of the encoding options are exclusive to the MP4 container".

(Your app uses ffmpeg - what kind of "encoding options are exclusive to the MP4 container" does ffmpeg presume?)

And that encoding (to the MP4 format that I do not need) took THREE!!! times longer than the interpolation procedure itself.

Encoding MP4 video with CRF 0...
encoded 41920 frames in 12243.78s (3.42 fps), 93524.81 kb/s, Avg QP:7.85
Total processing time: 05:46:10

To summarize:

I wasted more than 3 hrs outputting a stream format that doesn't satisfy my needs, just to get a stream with proper scene-detection output.

Workaround: (probably a bit dirty, but so far no other choice)

1. Check "Don't Delete Temp Folder After Interpolation" in the "Settings" - > "General"

2. Adjust parameters and start interpolation

3. "Brutally" intercept "Flowframes" upon finishing interpolation and starting mp4 encoding.

4. Bring *.png's from "interp" folder in to your NLE with proper frame rate interpretation.

5. Replace distorted frames on scene cuts with corresponded frames from folder "scenes".

6. Output your video in your preferred format.

Sure, this is a bit of a crazy approach, but so far it's the only way I've found to get things to work.
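
For step 6, a rough ffmpeg sketch (assumptions: eight-digit frame names as found in the "interp" folder, a 2x-interpolated 23.976 fps source, and ProRes as the target - adjust paths, rate, and codec to taste):

:: Illustrative only - the frame pattern, frame rate, and output codec are assumptions.
ffmpeg -framerate 48000/1001 -i interp\%08d.png -c:v prores_ks -profile:v 3 output.mov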

Chill, it's still in development. If it doesn't satisfy your requirements, work with what there is now or wait until I've updated it.

You don't have to intercept anything, you can just use the step-by-step mode if you wanna use the output frames and encode yourself.

I'll look into fixing scene changes with the PNG sequence output mode, as well as adding more export formats.

Anything on your wishlist? I'd probably start with VP9 and ProRes.

That would be great.

The major item on my wishlist for now is getting proper frame de-duplication working.

Whatever I did, I still couldn't get a successful result. Your help is greatly appreciated.

So, let's take a look at the original video that I already mentioned before:

https://drive.google.com/file/d/1mdwIDA2t369Meml4MBDIaENJWxFD2K4N/view?usp=shari...

This video contains duplicated frames in the following pattern:

  • frame #4 - duplicate of frame #3
  • frame #9 - duplicate of frame #8
  • frame #14 - duplicate of frame #13
  • frame #19 - duplicate of frame #18
  • frame #24 - duplicate of frame #23
  • ... etc.

As you can see, this is an obvious pattern for telecined video (23.976 to 29.97) that can easily be reconstructed back to 23.976 using a standard pulldown-removal procedure in most NLEs (Avisynth's "MultiDecimate" can be used as well).
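
(As an illustration only: ffmpeg's decimate filter, which by default drops one duplicated frame per cycle of five, can do the same decimation from the command line - filenames and encoder settings below are placeholders.)

:: Drops 1 of every 5 frames (the duplicate), taking the 29.97 fps source back down to 23.976 fps.
ffmpeg -i telecined_2997.mp4 -vf decimate -c:v libx264 -crf 0 progressive_23976.mp4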

Now, let's take a look at the 2x video processed by Flowframes with "Remove Duplicated Frames After Extraction" and a threshold of 10%.

https://drive.google.com/file/d/1eSR2lww-OXBH4MtDPtn3o2Ki_aR01nrO/view?usp=shari...

The processed video, instead of replacing the duplicated frames with interpolated ones, not only leaves them untouched but even adds more duplicated frames in a weird, irregular pattern:

  • frames #5, 6 - duplicates of frame #4
  • frame #10 - duplicate of frame #9
  • frames #17, 18 - duplicates of frame #16
  • frame #27 - duplicate of frame #26
  • frame #29 - duplicate of frame #28
  • frame #37 - duplicate of frame #36
  • ... etc.

Besides that, the length of the processed video (4s 789ms) differs from the length of the original (4s 71ms). That's not a big difference for a ~4 sec video, but imagine what the out-of-sync offset would be for, say, a 1 hr video. (BTW, this issue has already been mentioned by GearlessGoG as well.)

For now this is the  major problem for me. 

Lower your threshold. 

10% kills a lot of similar frames, which is why your result looks so weird. I wouldn't go above 2% or 3% for clean camera footage; cartoons, on the other hand, need higher values.

The video length problem should be fixed in the next update.

That's the reason I asked you to assist. I tried all the listed thresholds. Even in the best scenario (1, 2, 3%), it produces the same video as the input, with the same duplicated frames in the same positions.

1. With "Scene Detection" active, the Image Sequence output mode just moves the interpolated frames out of the temp folder without replacing the distorted interpolated "cuts" with the images from the temp "scenes" folder, as it does when outputting to MP4.

2. Any chance of adding more output formats (uncompressed AVI/MOV, QT ProRes, ...) available from the ffmpeg you're already using?

1) Yeah, that's a little oversight. Should be possible to fix that.

2) Not sure, some of the encoding options are exclusive to the MP4 container. If you want a lossless format, you can use h264 with CRF0 for now. Will think about it though, not sure how much work it would be to add more formats.

Any update regarding a possible solution to the frame duplication issue?

Can't reproduce your problem.

https://drive.google.com/file/d/19W4DwA5yWcwW6404Evfi9Vxig8HvsBO2/view

This video works fine with accurate deduplication set to 5%.

(It needs a higher threshold because it has some very strange morphing going on)

I have one question, because I feel I might be missing something. If I enable 'remove duplicated frames', won't I end up with a final video with different timing, making it very difficult to resync the sound? I tried it once and the number of frames was (obviously) lower than 'x times the original source', so when re-encoding, the final video was totally out of sync. Did I do something wrong? Should the frames then get different timings? Or is it 'by design'?

Also, do you have changenotes somewhere?


Thank you

Changelog is in Discord (linked in Flowframes) and in the Flowframes start screen.

De-Duplication shouldn't change the video length unless you disabled dynamic timing in the settings.

lol my bad I didn't realize the changenotes were always up to date.

I think for some reason that function had been disabled. For now I will keep the duplicate frames because I'm running some time-demanding projects. I will look into the 'dynamic timing' thing afterwards.

Thanks!

Hi guys, I'm having some trouble... I'm trying to interpolate a 24 fps (step: 2) .avi video at 1920x1080 and I keep getting this error (I have an RTX 2060 6 GB). Any way around this?



That means your card can't use the selected algorithm at the intended resolution because of VRAM. DAIN uses A LOT of VRAM (they recommend at least 10 GB for interpolating 720p; you can only safely interpolate 1080p with it using workstation-grade cards such as the Quadro RTX 6000, 8000, or A6000). The PyTorch version of RIFE also needs lots of VRAM. Try RIFE-NCNN or CAIN.

Thanks man!! this really helped me

Go to Settings -> AI Specific Settings -> RIFE Fast Parallel Mode -> Disabled

It was already disabled but thanks anyway

Incredible tool, thank you!


Got an issue with an MP4 file. When initiating interpolation, I receive an error message saying "input video file is not valid". I guess my file format is wrong; what requirements do I need to fulfil? Running v17 rc4 as well as v16.

There's a bug in all versions before 1.18.1 where the program won't accept files with an uppercase extension.

So, if your file is called video.MP4, rename it to video.mp4. Then it should work.

That fixed it. Thanks!

Running v1.18.2. Duplicated frames still don't get removed/interpolated, regardless of the specified "Threshold" value.

https://drive.google.com/file/d/19W4DwA5yWcwW6404Evfi9Vxig8HvsBO2/view?usp=shari...

Need permission for that link, please make it public.

It's public now, sorry.

Link for QT ProRes:

https://drive.google.com/file/d/1bWAiO7TvTCAjDrVs05djPvmZlZfRTdaE/view?usp=shari...

Here's another sample where de-duplication fails:

https://drive.google.com/file/d/1mdwIDA2t369Meml4MBDIaENJWxFD2K4N/view?usp=shari...

ProRes will work in the next update.

Not sure what kind of problem you have with the second video. This is my result: https://icedrive.net/0/4aoydQdcT2

If you go frame by frame through your result, you can see "tons" of duplicated frames that have not been removed and interpolated. Another issue is that your sample's duration differs from the original's (original = 4s 71ms, your sample = 8s 76ms).

It's not supposed to be like that. Both of them should have the same duration.

We are talking about different fps, not duration.

Even assuming the duration is doubled, that doesn't work either: 4s 71ms * 2 = 8s 142ms, not 8s 76ms. The original video consists of 122 frames. Doubling that should give us 244 frames, but your sample contains 484 frames???

Two questions:

1. Any forecast as to when V17 will be available to free users? Scene detection is a highly anticipated feature.

2. Could it be possible to assign x3, x5, x6, x7 (i.e. not only powers of two)?

Thanks!

1) Soon, within a few days

2) No, that's not possible, at least not at the moment.

Very impressed with your app.

There are several issues I'd like to bring to your attention, and maybe get your support:

  1. Setting the output mode to "Image Sequence" still outputs MP4. How can I output an image sequence (lossless)?
  2. On a "cut" scene change, it produces a distorted transition.
  3. Duplicated frames don't get removed/interpolated, regardless of the specified option; they stay in the same places.
  4. "The latest versions are exclusive to Patreon for a short time" - what does that mean? Who can get access to your latest dev builds?
  5. I suspect you are using "MegaDepth" or a similar depth-estimation approach. If so, is there a way to output a depth-map sequence corresponding to the interpolated video?
  6. The interpolated video has a different length than the original.

1) Fixed in 1.18

2) Scene detection was added in 1.17

3) Increase deduplication threshold

4) Patrons can

5) I do not use depth estimation, except for DAIN, but it's not possible to output the depth map

6) Should be fixed in 1.18, except for videos ending with duplicates/static frames, to be fixed later

I'm "All-access Patron" for GRisk "DAINAPP" - is that patronage not applicable here? If not, how to become a Patron here?

I have no affiliation with GRisk or DAIN-APP, so it's not. This is my Patreon: https://www.patreon.com/n00mkrad

https://github.com/baowenbo/DAIN/

In his implementation you can specify an arbitrary target FPS, not just 2x/4x/8x. Such an ability would be a great addition to your app.

This is only possible for DAIN; it might be added in the future. Not applicable for RIFE.

Doesn't accept QT ProRes sources. Any chance to "fix" that?

Send me a video sample with that format.


I get this error when using RIFE to interpolate a 1080p image sequence from 15 FPS to 60.  Any reason why?

Fixed in 1.17

;w;

Cool tool... but not yet sufficient for my needs. I have several hours of old interlaced video files. They are in AVCHD format (1080/50i anamorphic; the internal format is 1440x1080 interlaced), filmed with a Sony camcorder. So the interpolation needs are different: getting to 1920x1080 50p. Perhaps an enhancement for the future... or a new product.

If you are looking for upscaling and de-interlacing, this app is not what you're looking for.

I'm planning to buy a new AMD GPU, but I'm at a loss because I can't use DAIN-APP.

I was delighted to learn that Flowframes allows me to use DAIN-APP on AMD GPUs, but then I learned about RIFE, which has better processing speed. It's Nvidia only.

The world is cruel.

Internally, a RIFE beta for Vulkan-based AMD GPUs is included. I was able to use it on my Radeon 5700 XT.

v18 (which was just released on Patreon) has a new build of RIFE-NCNN, which runs on AMD hardware and is a lot faster than the one included in v16 right now.

Nice program, I'm testing it out on Linux using WINE. The GUI opens just fine! After trying out a video, unfortunately both RIFE and CAIN run at around 1.5 FPS, making it unusable for me (using a 5700XT). I assume it's much faster on Windows 10, which I don't use. Hopefully it gets fixed soon.

Those speeds are normal for the NCNN engine, though in v18 they are getting a bit faster.

Don't expect to reach CUDA speeds, though.

It's going to take a lot of time until we get to speeds similar to SVP or similar programs.

I used the program, but it mixes frames from two different scenes. Can the program detect scene changes?

RIFE in v18 should be optimized for this.

Yes, the latest version on Patreon supports this.

It will become free in a few weeks.

Can you add a function such as language settings?

I'm getting the same error (output folder does not contain frames) any time I try CAIN or DAIN. RIFE doesn't slow down the video, but rather shows half the original video at full speed and then freezes on a frame. The video is 1080p at 29.97.

Wait, do I have to use the installer application every time I want to use it, or can I use the other .exe files?

I'm getting this error when I try to interpolate a 4K video... please fix it.

I had the same problem and discovered that for some reason it was using GPU '0,1'. I changed it to '0' and that problem was gone.

EDIT: It works fine in 1080p. Seems to be a problem with higher resolutions.

Where can I report bugs?

When interpolating larger videos (like a few minutes long) using RIFE, I get an error:

And everything gets deleted, with no more info on how to fix it.

To reproduce, try interpolating "The HU - Yuve Yuve Yu (Official Music Video)" (v4xZUr0BEfE) in 4K.

I'm getting this same error message. I'm trying to interpolate a 1080p video with RIFE.

How long is the video? What is the original framerate?

It's only about 80 frames long. Technically it's encoded at 29.97 fps, but I told Flowframes it was 7.4925 fps so the final result would be 29.97.

Try with an integer number of fps?

Same here... I'm trying to interpolate a 4K video, but I get this error. Hope they fix it.

Out of memory; you need more than 8 GB of VRAM to do native 4K.

You probably ran out of memory.

Disable parallel processing or downscale your video.
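
For the downscaling route, an illustrative ffmpeg one-liner (input/output names are placeholders):

:: Scales a 4K source down to 1920 wide (height kept even) so it is less likely to run out of VRAM.
ffmpeg -i input_4k.mp4 -vf scale=1920:-2 -c:v libx264 -crf 16 input_1080p.mp4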

Doesn't work on RTX 3080... it makes some weird artifacts!!

You can try to replace the torch folder inside Flowframes' py folder with the torch folder from DAIN-APP 0.48.

It would be better to simply install Python locally, though.

Go to the Discord server for help with that.

I tried to use this with Wine on Linux, but I get an authenticity/decryption error when trying to download the packages. Is there another way I can use these AI models with a different frontend? This is super exciting technology :)

If you are on Linux, try the original code for RIFE.

https://github.com/hzwer/Arxiv2020-RIFE
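
Roughly like this (illustrative only; check the repo's README for the pretrained model download and the exact script options):

# Clone the upstream RIFE repo and run its video inference script (2x interpolation).
git clone https://github.com/hzwer/Arxiv2020-RIFE
cd Arxiv2020-RIFE
pip3 install -r requirements.txt
# Place the pretrained model files in ./train_log/ first (see the README), then:
python3 inference_video.py --exp=1 --video=input.mp4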

Holy heck! Thank u

you are a legend 
