As the title says, on fairly decent specs (on a laptop) I struggle to get above 100 FPS, and the game stutters every once in a while when rendering new chunks.
CPU: i7-9750H, with turbo capped to 3.2 GHz
GPU: RTX 2060
RAM: 16 GB DDR4-2666
Minecraft is installed on an NVMe SSD.
The Java runtime is the latest Java 8 JRE from AdoptOpenJDK, and the Minecraft version is 1.16.3.
I'm allocating 6 GB to the Java runtime, and I made sure in both the Windows graphics settings and the NVIDIA Control Panel that the Java executable is using the dedicated GPU instead of the integrated one.
I tried both vanilla and modded; installing Fabric + Lithium + Sodium + Phosphor only gave me an extra 20-30 FPS compared to vanilla.
Anyone have any idea?
Post a screenshot with F3 enabled and your JVM arguments.
Make sure you are using the latest drivers; if you already are, test with an older driver.
Also disable the GeForce Experience overlay:
https://www.howtogeek.com/271199/how-to-hide-the-nvidia-geforce-experiences-in-game-overlay-icons/
This is on vanilla, fancy graphics, V-Sync off, 24 render distance. All overlays are off and the latest NVIDIA driver is currently in use.
JVM args are -XX:+UseG1GC -Xmx6G -Xms6G -Dsun.rmi.dgc.server.gcInterval=2147483646 -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=50 -XX:G1HeapRegionSize=32M
Is there an increase with reducing the render distance?
Going down to a render distance of 16 bumps me up by about 50 FPS or more.
Allocate less memory to Minecraft, try -Xmx2G in your JVM arguments.
Allocating 2 GB instead of 6 GB dropped my FPS by about 10-20.
Try -Xmx4G at least.
Changing RAM has no influence on FPS
Is there the same issue on other versions? Test it with 1.12.2
On 1.12.2 I hover around 80-120 FPS, never going under that.
False. Allocating too much memory can degrade performance.
The more RAM you allocate to Minecraft, the longer garbage collection takes and the bigger its impact on the game.
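For anyone who wants to check this on their own setup, the JDK's standard management API exposes per-collector counts and cumulative pause times; comparing them between runs with different -Xmx values shows whether a bigger heap is really trading fewer collections for longer pauses. A minimal illustrative sketch (not Minecraft code; class and method names are my own):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Illustrative sketch: sums GC statistics across all collectors reported by the JVM.
public class GcWatch {
    static long totalCollections() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            total += Math.max(0, gc.getCollectionCount()); // -1 means "unavailable"
        }
        return total;
    }

    static long totalPauseMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            total += Math.max(0, gc.getCollectionTime()); // cumulative ms spent collecting
        }
        return total;
    }

    public static void main(String[] args) {
        // Churn some short-lived garbage so at least one collection is likely to run.
        for (int i = 0; i < 200_000; i++) {
            byte[] junk = new byte[1024];
        }
        System.out.println("collections so far: " + totalCollections());
        System.out.println("total GC pause ms:  " + totalPauseMillis());
    }
}
```

Run it once with `-Xmx2G` and once with `-Xmx6G` and compare the pause totals for the same workload.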
Okay, so... Minecraft performance is almost an art form at times. If you're looking for consistency (smoothness, with fewer or smaller stutters), you pretty much need what is otherwise overkill hardware, because the points of highest demand (and I believe rendering newly generated chunks is up there) are so much heavier than everything else that getting those spikes low enough to not be an issue takes hardware that is overkill for the rest of the game.
Let me illustrate... I'm intentionally using a really high render distance of 48 here, which will exaggerate the issue, with fast trees and no resource packs to minimize the chance of the GPU ever being the limitation. I also play with V-Sync on (and triple buffering, to avoid the frame rate halving when it can't maintain 60 FPS).
Looks... not just playable, but smooth?
Not necessarily. It's not representative of all situations.
Here's what happens if I fly in a small circle.
Notice the spiking frame time on some frames. I don't think it's having to generate any new terrain; it's just loading some already-generated chunks (still a rather heavy demand, just less so).
Here's what happens if I fly in a straight line to a point where it is generating new terrain.
Similar to before, but a bit worse. Chunk generation isn't keeping up with me moving.
Okay, let's see what happens with the settings I normally play with (same as above, but with a resource pack plus bushy leaves, fancy [smart, really] trees, and even shaders, though the render distance is now just 16).
The difference should be apparent. And here are those same settings while moving around and loading terrain.
Sorry for all the pictures; that is why the spoiler.
Now, this isn't to say you need overkill hardware just to play the game; not at all. It's just to say you'll probably need overkill hardware (more specifically, for the settings you're trying to use) IF you want consistency at the level you're asking about. If you don't mind not having super smooth consistency, the above is actually probably "playable" for some people.
So, in this case, if you're not getting that and you seek it, the first thing to try is to drop settings. Start with render distance.
Frame rate (as shown in-game) is deceptive to take at face value for smoothness in this game. As you're finding out, despite a relatively high frame rate at times, like in your screenshot, there can still be stutters. Stuttering during chunk generation is normal, and it gets worse roughly quadratically with render distance (every doubling of render distance quadruples the loaded chunks; not every demand gets quadrupled, but the increase is heavier than linear). It's a huge demand on the CPU especially, and Minecraft has gotten heavier over the years, whereas CPUs mostly stopped getting meaningfully faster (at least in the way that matters for Minecraft here) year over year a long time ago. Press Alt+F3 and watch the frame time graph in the bottom left; it will show you the delays as you move around while terrain generation is occurring.
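To put rough numbers on that quadratic growth, here's a small illustrative sketch (not Minecraft's actual loading code; the (2r + 1)^2 square of chunk columns around the player is an approximation, and exact counts vary by version):

```java
// Back-of-the-envelope: loaded chunk columns grow quadratically with render distance.
public class ChunkMath {
    static long loadedColumns(int renderDistance) {
        long side = 2L * renderDistance + 1; // chunks along one side of the loaded square
        return side * side;
    }

    public static void main(String[] args) {
        for (int r : new int[] {8, 16, 24, 32, 48}) {
            System.out.printf("render distance %2d -> ~%d chunk columns%n", r, loadedColumns(r));
        }
        // Doubling 16 -> 32 roughly quadruples the loaded area:
        System.out.println((double) loadedColumns(32) / loadedColumns(16)); // ~3.88
    }
}
```

So going from 16 to 48 render distance loads roughly nine times the chunks, which is why the spikes in the screenshots above grow so much.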
That said, 1.16 seems rather good compared to 1.13 and 1.14, but it's still a high expectation to seek "never dropping below 60 FPS, let alone 100+ FPS, while at high render distances". You might get a somewhat consistent frame rate once everything is loaded, but chunk generation, or simply doing things while playing, will incur penalties.
Also, make sure "biome blend" is set lower; try 3x3. I've heard that lag spikes, especially during block placement, can be caused by that being set high.
Sorry for the long-ish post, but as I said, the finer details of Minecraft performance are almost an art form. A screenshot with "I get xxx FPS" isn't necessarily a good indicator of smooth performance. As someone who has spent a lot of time playing at higher render distances (but ultimately settling on 16 for now, because I want to use shaders and I get lag with them starting at around 20 with the settings/resource packs I play with), that's the best I can advise. I miss the increased views; people really underestimate how much it literally broadens your horizons, but the increased stutter/decreased consistency at those render distances is real, and not having it is wonderful. If you're not dead set on that render distance, I'd try 16 to 20 or so. Hope this helps clue you in if you want to tune performance/settings, although ultimately it comes down to "settings/render distance is too high".
A lot of the issues that Princess_Garnet mentions are supposed to have been fixed in newer versions; they multithreaded chunk rendering in 1.8 and world generation in 1.13 (which shouldn't affect the client anyway, since the server has run on a separate thread since 1.3). The only things I can think of are that Mojang's multithreading implementations are very poor, or that OpenGL itself is the problem, but I'd expect the driver to have been written with multithreading in mind (there is a setting in the NVIDIA control panel called "threaded optimization", which should be enabled; many people may still have it disabled because it caused issues with pre-1.8 OptiFine's multithreaded rendering option, which was removed in 1.8 along with other rendering options).
Also, I don't have an overkill system by any means (the OP's CPU is about twice as fast overall, and 40% faster per core), but I don't have issues with lag/stuttering, even in my mod's "Mega Forest" biomes, though FPS does drop quite a lot on Fancy when moving around (by one measure, it takes about 25 ms to render a chunk section entirely filled with Fancy leaves, which by itself represents a drop to 40 FPS; interestingly, about 90% of this time is spent in a call to OpenGL, not Java code, which I have heavily optimized - I don't even use OptiFine anymore, which isn't compatible in any case and makes much less invasive changes - but that mainly benefits more usual cases, where chunks, including leaves on Fast, take 1-3 ms to render). Note that my rendering system (similar to vanilla before 1.8) updates chunks within the same thread as rendering (there is only a single client thread and a single server thread, excluding chunk loading/saving and a few other minor threads), so the total frame time includes the time spent on chunk updates. My chunk update limiter changes how long a chunk update may take as a percentage of the idle time (frame time minus rendering and client ticks), with at least 1 update per frame, so complex chunks update at the frame rate while simple chunks can be updated many more times per frame. I have a similar limiter for lighting updates, which are limited to 1 ms per chunk or 10 ms per tick; as with rendering and world generation, light updates shouldn't be an issue in the latest versions, since 1.14 multithreaded lighting, with issues like MC-11571 marked as fixed.
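The "25 ms section = drop to 40 FPS" figure above is just the reciprocal relationship between frame time and frame rate; a tiny sketch of the arithmetic (illustrative only):

```java
// Frame time (ms) and frame rate (FPS) are reciprocals: fps = 1000 / frameTimeMs.
// A single 25 ms chunk rebuild on the render thread caps that frame at 40 FPS,
// even if every other frame renders in 1-3 ms.
public class FrameTime {
    static double fpsFromFrameTimeMs(double frameTimeMs) {
        return 1000.0 / frameTimeMs;
    }

    static double frameTimeMsFromFps(double fps) {
        return 1000.0 / fps;
    }

    public static void main(String[] args) {
        System.out.println(fpsFromFrameTimeMs(25.0)); // 40.0 - one heavy chunk section this frame
        System.out.println(frameTimeMsFromFps(60.0)); // ~16.7 ms budget per frame to hold 60 FPS
    }
}
```

This is also why the frame time graph (Alt+F3) is a better smoothness indicator than the FPS counter: a single long frame barely moves an averaged FPS number.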
Of course, there are other major differences between my own modded version and current vanilla versions, such as the rate of object allocation and subsequent garbage collection; while idle with FPS capped to V-Sync, my game only allocates 1-2 MB per second, while newer versions may allocate 50-100 times that, with a much higher base memory usage (I've seen mine as low as 60 MB at 8 chunks and around 200 MB at 16 chunks; even that isn't the true memory usage, which VisualVM has shown to be around 160 MB, with 123 MB for "Chunk" objects, or 78.5% of total memory usage).
Also, there is a bug report regarding FPS issues since 1.15, which made major changes to rendering (among others, it fully removed the ancient "fixed-function" OpenGL that older versions used; 1.6.4 only needs OpenGL 1.2 and relies on a "compatibility profile" to even run, as all fixed-function rendering was removed from the OpenGL 3 spec - in particular, I've seen cases where somebody could only run 1.15+, with older versions complaining that there was no OpenGL). Supposedly, this should result in a significant performance increase, on top of the earlier move to things like VBOs, which also have a huge advantage over deprecated display lists in that you can incrementally update them, while a display list needs to be rebuilt from scratch - in theory, the game only needs to re-render individual blocks that have changed, not entire sections. Interestingly, vanilla 1.16 doesn't seem to show chunk updates, though the screenshots that Princess_Garnet posted, with OptiFine, suggest they just decided not to show them.
Note that there was a similar issue for 1.8, which was closed as "lower settings/upgrade your system", but I've even seen people with top-end hardware (e.g. i9-9900K and RTX 2080) getting poor performance on 1.15-1.16, suggesting that some particular system configuration is at the root of these issues (as with the 1.8 issue, where some people with higher-end hardware had problems while others with low-end hardware saw improvements; in both cases there were significant changes to rendering).
Also, it appears that since 1.15 the interior faces of leaves are not being culled on Fast graphics, negating its benefits (as mentioned above, rendering Fancy/non-culled leaves is quite expensive for OpenGL), though that wouldn't explain all cases of poor performance (interestingly, this bug first appeared in 1.8 and was fixed in 1.9; my old computer didn't see any improvement in 1.9, though, except maybe in jungles, and the issues I had were mainly due to poor server-side performance).
Another thing I notice is that the OP's screenshot shows the server tick time to be 25 ms, which is quite high (anything less than 50 is good, but I usually see only 1-2 ms; even chunk generation only increases it to 5-7, despite no multithreaded world generation). This suggests something else is going on than just a rendering bottleneck, since client and server performance should be independent (e.g. while developing mods I've caused the server side to completely lock up in an infinite loop with no effect on client performance), with the exception of high packet traffic. In particular, vanilla 1.6.4 outputs packet overload warnings, with lag spikes, when teleporting to a newly generated area, due to all the falling sand entities (gravel used to collapse into caves immediately after world generation, which was removed in a more recent vanilla version; I fixed this by placing the blocks on the surface below if there is air under the vein during generation, so they never actually fall or cause packets to be sent).
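The 50 ms figure comes from the server's fixed 20 ticks-per-second rate; a small sketch of the budget math (illustrative, not game code):

```java
// The server targets a fixed 20 ticks per second, so each tick has a 1000/20 = 50 ms budget.
// A reported tick time of 25 ms means half that budget is already consumed, which is why it
// stands out next to the 1-2 ms tick times usually seen on an idle single-player world.
public class TickBudget {
    static final int TICKS_PER_SECOND = 20;

    static double tickBudgetMs() {
        return 1000.0 / TICKS_PER_SECOND; // 50 ms per tick
    }

    static double budgetUsedPercent(double tickTimeMs) {
        return 100.0 * tickTimeMs / tickBudgetMs();
    }

    public static void main(String[] args) {
        System.out.println(tickBudgetMs());          // 50.0
        System.out.println(budgetUsedPercent(25.0)); // 50.0 - the OP's screenshot
        System.out.println(budgetUsedPercent(2.0));  // 4.0  - a typical idle tick
    }
}
```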
I can only speak from my experience as an end user, as I'm not well versed in the coding changes, but in 1.7.10 and prior, going between the Overworld and the Nether was fine. Starting in 1.8 (notice how I often call this the worst Minecraft update for a reason?), moving from the Overworld to the Nether incurred a solid 5 to 10 seconds of EXTREMELY low frame rates, which I found odd if they multi-threaded it to always be loaded. This was still the case in 1.10 and I think 1.12, but I moved straight from 1.10.2 to 1.14.x, and in 1.14 it is so much more wonderful now, like oh my goodness (I imagine 1.13 was the actual change point for this). It seems to me they have been doing some improvements to the game lately (despite 1.13 initially being problematic). The game is ultimately getting heavier with a few key updates, but SOME things are also vastly improved (chunk loading being one; I tried 1.2.5 the other day, which is one of the older, lightweight releases, and even a somewhat normal render distance of 16 CRAWLED on it - unless it's OptiFine's magic that makes modern versions so graceful with this aspect).
Also, your mod is (I think) like super, super optimized and lightweight compared to modern Vanilla Minecraft. If you want super consistency with that AND at high render distances (I'm referring to like 24+ or so), I'm not really sure it's possible? New chunk generation just seems to be among the heavier of Minecraft's demands (worth noting that my current CPU does seem to handle Minecraft better than my old one, but I am not sure I attribute that to the extra multi-threading as CPU use still seems to resemble "one heavy thread and rest idle" so while I'm sure parts of the game make use of, and may be vastly sped up by, extra cores/threads, the game ultimately seems limited by raw core speed most of the time).
No idea about leaves, but OptiFine's "smart" trees setting (I'd only use it with a bushy-leaves-style resource pack/mod) is make or break for me. With it off, I can run shaders at ~20 render distance. Adding bushy leaves changes that, but then the "smart" setting, which seems to only render the front-most faces of those blocks, gets my performance back.
To summarize for the OP: you're probably getting around the expected results for a render distance of 24 on that hardware. You could try OptiFine, though I'm not sure it would beat the mods you already tried. It's a very capable laptop, but laptops also tend to have lower clock speeds; desktop CPUs these days boost from 4 GHz+ to beyond 5 GHz, and that makes a big difference too. I'd say decide whether the views from that render distance are worth the lowered consistency, or go to something like 16 or 20. When you're on the ground and not on a mountain or flying, the larger render distances give less benefit but cost more.