Everything posted by stgatilov

  1. Yes, it would be great if you looked through the initialization and saw why it shows that message. Of course, I can answer any questions about the code.
  2. Do you have basic knowledge of Visual Studio and C++?
  3. Maybe try Doom 2016 in OpenGL mode? I don't know what other OpenGL games there are... Minecraft? Can you at least run OpenGL tech demos?
  4. By the way, which versions of GCC, CMake, and make do you use?
  5. Look at the file it points to. For some reason, the header file contains garbage. More precisely, it contains object code --- I have no idea how object files ended up inside header files.
  6. How about other games?
  7. A few more fixes in SVN rev 9608, this time related to fog lights.
  8. Well, I'm almost certain you won't be able to do that. Of course, I'm known to be quite pessimistic... It is useless for dynamic situations: as soon as the player moves a light source close to a doorway, everything breaks. And for static situations, precomputed data is much more realistic, and probably easier to achieve.
  9. It can be the precompiled header at the beginning, and linking at the end.
  10. Unfortunately no. First of all, you do set the /MP flag in project settings, which means "unlimited parallelism" and results in T or 2T compiler processes per project. If you don't set this flag on a project, its cpp files build strictly sequentially. You can also set /MP2 to say "build 2 cpp files in parallel". So you do force MSVC to spawn a specific number of processes per project (and the exact number depends on the machine). It is quite silly that parallel builds are controlled via the project's build settings instead of the .user file or user-specific IDE settings, but that's how things work, as far as I know... Second, Visual Studio does not control the total number of processes properly --- just see my example about 144 compiler processes on a 12-thread CPU. There is no way to say "spawn 12 parallel processes"; you can independently control 1) the number of parallel projects in IDE settings, and 2) the number of compiler processes per project in project settings, and the total can rise to the product of these two numbers. Perhaps something has changed recently or will change in the future, I don't know. Here is a related question, BTW: https://stackoverflow.com/questions/45379427/how-to-limit-the-number-of-parallel-cl-exe-processes-during-the-visual-studio-so
  11. Yes, exactly. Of course, there is /MP to compile cpp files in parallel, but in practice it does not always fill the CPU, so building several projects at once is not useless. That's the real problem of Visual Studio: given many projects and cpp files to build on a T-core hyperthreaded CPU, it builds 2T projects simultaneously and spawns 2T compilers per project to build cpp files in parallel. This gives you 4T^2 compilers at once, e.g. 144 on a six-core machine. Normally a compiler does not take too much RAM, but 1) overly C++-heavy projects need more RAM (Eigen, I'm looking at you), and 2) 144 instances eat gigabytes. I even had to increase the RAM on my work machine from 16 GB to 32 GB, because when it depleted its RAM with 144 memory-hungry processes, remote desktop just stopped responding. Anyway, if you have such problems on your machine, you can always limit the parallelism in Visual Studio settings. I don't see how RAM can suffer from too many requests. Also, even with 144 processes, only 12 of them execute at once during each time quantum on a 6-core hyperthreaded CPU; all the rest are sleeping. You don't get more cache pressure or RAM pressure beyond the number of threads your CPU supports, and the additional processes can kick in if the primary ones go to sleep for some weird reason.
  12. As I wrote above, there is a general way to check for image format support: create a framebuffer object with the desired format and check it for completeness. There is a special error status for a driver to say "this is not supported". We do this check, but the OpenGL implementations under discussion don't report any error.
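A minimal sketch of that completeness check, assuming a current GL 3.3 context (`FormatSupported` is an illustrative name, not the actual engine function, and error handling is omitted):

```cpp
// Probe an internal format by attaching a texture of that format
// to a throwaway FBO and asking the driver about completeness.
bool FormatSupported(GLenum internalFormat, int w, int h) {
    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, w, h, 0, GL_RGBA, GL_FLOAT, nullptr);
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &tex);
    // GL_FRAMEBUFFER_UNSUPPORTED is how a driver is supposed to say
    // "this format is not supported"; the drivers in question never return it.
    return status == GL_FRAMEBUFFER_COMPLETE;
}
```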
  13. We have at least 3 users affected by this:
      @Araneidae: OpenGL vendor: X.Org, OpenGL renderer: AMD CAYMAN (DRM 2.50.0 / 5.9.8-100.fc32.x86_64, LLVM 10.0.1), OpenGL version: 4.3 (Core Profile) Mesa 20.2.2 core
      @zergrush: can't find anything more specific than: Radeon card, on Linux.
      @Alberto Salvia Novella: OpenGL vendor: X.Org, OpenGL renderer: AMD CYPRESS (DRM 2.50.0 / 5.10.63-1-MANJARO, LLVM 12.0.1), OpenGL version: 4.3 (Core Profile) Mesa 21.2.1 core
      I wonder if we should already introduce a hack for automatically switching to 32 bits on such platforms: either by searching for CYPRESS, CAYMAN, and similar codenames of pre-GCN chips, or by checking the main menu screenshot for corruption. Interestingly, screenshot checking sounds more reliable to me.
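If the codename route were taken, the check could be as simple as a substring scan of the renderer string (an illustrative sketch: the function name is made up, and only the two codenames from this thread are listed --- a real check would need the full pre-GCN family):

```cpp
#include <string>

// Match the renderer string (as returned by glGetString(GL_RENDERER))
// against the pre-GCN codenames seen in the reports above.
bool LooksLikePreGcnChip(const std::string &renderer) {
    static const char *codenames[] = { "CYPRESS", "CAYMAN" };
    for (const char *name : codenames)
        if (renderer.find(name) != std::string::npos)
            return true;
    return false;
}
```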
  14. Things look wrong because the added light (let's call it a VPL = virtual point light) doesn't cast shadows. One well-known algorithm for global diffuse illumination is "Instant Radiosity", where 1) several VPLs are generated randomly, and 2) the VPLs do cast shadows. Of course, a lot of work is needed to make it fast enough. I'm afraid there is no way around light passing through walls: if you think about it, there is no way to distinguish "yes, we allow light to pass through this thing" from "no, this thing should cast shadows", especially if the player can move the light source. And if he cannot, then let's better discuss lightmaps.
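A toy sketch of the VPL accumulation described above (all names and constants are made up for illustration; this is not the engine's code). The commented-out shadow test is the missing piece that produces exactly the light-through-walls artifact:

```cpp
#include <algorithm>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Diffuse lighting at surface point p (normal n), accumulated from
// numVpls randomly placed virtual point lights.
float diffuseFromVpls(Vec3 p, Vec3 n, int numVpls, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
    float sum = 0.0f;
    for (int i = 0; i < numVpls; i++) {
        Vec3 vpl = { u(rng), u(rng), 2.0f + u(rng) };         // random VPL above the surface
        // missing in the broken setup: if (!visible(p, vpl)) continue;  // the shadow test
        Vec3 d = { vpl.x - p.x, vpl.y - p.y, vpl.z - p.z };
        float r2 = dot(d, d);
        float invR = 1.0f / std::sqrt(r2);
        Vec3 l = { d.x * invR, d.y * invR, d.z * invR };      // unit direction to the VPL
        float cosTheta = std::max(0.0f, dot(n, l));
        sum += cosTheta / r2;                                 // Lambert term, 1/r^2 falloff
    }
    return sum / numVpls;                                     // average VPL contribution
}
```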
  15. I wonder what's the state of the new Game Connection GUI? I guess I'm not going to work on it in the near future, and it is actually done. The only thing that should be added is "Advanced Settings" for the Restart Game button. I mean, the "dmap" checkbox is OK as it is, but I imagine experienced users will be glad to control additional dmap arguments: noAAS (I recall someone wanted this checkbox: it cuts dmap time in half if you don't care about AIs) and noFlood (makes a lot of sense for a WIP map, since otherwise the user is forced to fix all leaks before playing it).
  16. I guess the default framebuffer cannot be multisampled; at least the GLFW manual says that for the current implementation, and the previous code also disabled multisampling. Both implementations suffer from the problem, as far as I remember.
  17. I have found a related "bug": FBO with GL_RGBA16F texture format causes silent drawing corruption. It turned out not to be a bug: the OpenGL spec says that if either the source or the destination framebuffer is multisampled, then they must have the same internal format (section 4.3.2). I wonder if something like this could happen in our case. By the way, if someone who experiences the problem sets "r_ignoreGLErrors 0" and then turns on "r_fboColorBits 64", does it result in spam of "GL_CheckErrors: ???" messages in the game console? Also, I wonder if the issue always happens regardless of antialiasing, or if it magically disappears with antialiasing off...
  18. Decided to create a dedicated thread for this problem. The first report was during the 2.09 beta: And a workaround was found pretty quickly:
  19. Try this torrent file. It contains one web seed and is not tracked, so technically it is not a torrent in the full sense, but at least putting it into a torrent client will make the download resumable. release209a.torrent
  20. Some time ago I also got excessive line breaks on the forums, although in my case whole lines were never broken into pieces; the line breaks I inserted just became much larger. I could remove them after the fact by editing my post twice. In the end I noticed that I had accidentally disabled JavaScript via a browser extension. I enabled it again, and now everything works perfectly.
  21. Yes, but given the age of the GPU, it probably won't happen. Originally, the problem was narrowed down thanks to this news: https://www.phoronix.com/scan.php?page=news_item&px=AMDGPU-FP16-DCE8-Patches&utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Phoronix+(Phoronix) But it is not clear what happened to this patch, which drivers it affects, and whether it is about our problem or something else.
  22. @Alberto Salvia Novella, the crappy colors in the menu are another bug, but this one is rather surely in the AMD driver. A series of old AMD GPUs doesn't support 64-bit colors, so you need to set "r_fboColorBits 32", or go to the advanced tab of graphics settings and toggle Color Precision to 32. Unfortunately, we cannot do anything about it, because 1) GL 3.3 requires support for 64-bit color, and 2) the implementation should report unsupported formats via framebuffer incompleteness, and our code already checks for that and prints a warning... but in this particular case the driver says everything is OK. There is no proper way to detect this problem. Well, I guess OpenAL decided to bump the priority of its thread but your permissions don't allow that. I don't think it is a big issue, and I surely cannot do anything about it. Perhaps a lack of flushing? We usually use either the logFile cvar or the condump command.
  23. Your screenshot shows revision 9540?
  24. Yes, there was some bureaucratic mess related to deprecation during GL 3.0 and 3.1, which was finally resolved in GL 3.2 --- things were "deprecated but not removed yet" and later turned into "removed in core". That can easily explain why it worked before the version bump but stopped working after it.