Search the Community
Search results for tag 'poor model texture alignment'.
-
Ah that's a bummer, because I think it could be beneficial for texture size. Well ok
-
Source? I've tried to find info that directly specifies that. I can only find that "3Dc" and "BC5" and "ATI2" are the same. I read that RGTC is BC4 and BC5, I guess. @duzenko talks about it in this topic: I actually don't really want to dig too deep into it (although it's kind of interesting), I just want to make sure the info on this page is correct: https://wiki.thedarkmod.com/index.php?title=File_formats#DDS Right now it seems there's partly duplicated info. I think using the BC# names is most useful, because it's easiest to find good info about them online. Reading the info here: https://www.reedbeta.com/blog/understanding-bcn-texture-compression-formats/#bc4 I would think that BC4 could be good for specular maps (or black-and-white diffuse maps), although maybe it doesn't matter? Although if people normally use DXT5 (BC3) for specular maps, then BC4 is better, because it's the same size as DXT1. If there's full support for RGTC (which supposedly is the same as BC4 and BC5), then is BC4 also supported?
-
Is there a high resolution image of the Golden Clock icon?
Guest replied to demagogue's topic in Art Assets
I think at this point it might be useful to rework the logo as a 3D model, so you can render it at any resolution you need. You could do the same for the TDM starting video too. Either that, or maybe use a vector app like Corel or Inkscape, whatever is easier for you. This could be a fun little project in itself -
You are assuming way too much, and you're creating artificial pressure. Nobody in the public forums knows whether the TDM team has even considered including this feature in TDM core, regardless of its completion state or quality. Obviously you have a right to have a personal opinion on the modification itself (which I think is cool too). Sorry for the off-topic here, Jivo: you're doing great, and don't let this little side note distract you from your work
-
[Bug] On Launch, Distorted First Frame When AA
Daft Mugi replied to Daft Mugi's topic in TDM Tech Support
@stgatilov Even better news! The following commit fixed this issue regardless of r_tonemapOnlyGame3d setting. r10930 | stgatilov | 2025-01-25 | 7 lines Clear background during main menu. This fixes the issue e.g. with AT1: Lucy and tonemap disabled. The briefing there does not use any backgrounds, and no-clear policy results in HOM-like effects. Originally reported here: https://forums.thedarkmod.com/index.php?/topic/22635-beta-testing-213/#findComment-499723 -
"3Dc", "BC5", "RGTC", "ATI2" are texture compression equivalents of "panther", "puma", "cougar" and "mountain lion": all different names for the same thing. The article is correct, but you need to look carefully at the wording: "Each channel uses the same compression technique as DXT5 alpha". DXT5 has smooth alpha but blocky RGB, that's why vanilla D3 swapped the red and alpha channels for normal maps (so at least the red channel would be smooth, even though the green channel would still be blocky). The idea behind BC5 is to use the same technique for both red and green channels, while leaving out the blue entirely (because it contains no additional information in a normal map, and can be reconstructed in the shader). This gives an effective compression ratio of 50% compared to the uncompressed image, without introducing the blocky artifacts of DXT1/3/5 normal maps.
-
Both of them are far from fitting the AI category: the monocycle is an animation with extra steps, and the spider uses an invisible, nonsolid AI spider plus a separate model for every part of each leg. Every part is operated by a script that reads the AI spider's joint position and angle and modifies it. In this case the upper front appendages are the spider's front legs moved far up, and the pincers use the spider's mandible animation. You can make a humanoid AI have backward-bending knees, or even reverse its movement so the invisible AI is walking on the ground while the visible parts are walking on the ceiling. Of course it requires that the ceiling geometry mirrors the floor shape, and the AI sees and reacts as if it were actually on the ground.
-
Can DR be used with engines like Godot?
Skaruts replied to Skaruts's topic in DarkRadiant Feedback and Development
@OrbWeaver do you think you could point me to the part of the code where vertex manipulation is handled? Maybe by looking at it I could figure out how the texture shifting is calculated as a consequence of moving vertices around. -
Please do. I can clearly see the lack of illumination on the new texture even without bloom enabled, on both 2.12 and 2.13. I agree that the texture being called "unlit" is a bad name for it, but mappers often choose textures based on appearance rather than name, so who knows how many missions we have where the author wanted an unlit look but couldn't get one, versus wanted the current blue lit appearance.
-
Isn't this the method described in this wiki tutorial? https://wiki.thedarkmod.com/index.php?title=Full-Screen_Video_Cutscenes#Movie_Theatre_Method_2 That one seems to be a video used as a texture on a surface.
-
/*
===================
SCR_SetBrightness

Enables setting of brightness without having to do any mondo fancy shite.
It's assumed that we're in the 2D view for this...
Basically, what it does is multiply framebuffer colours by an incoming constant between 0 and 2
===================
*/
void SCR_SetBrightness(float brightfactor)
{
	// divide by 2 cos the blendfunc will sum src and dst
	const GLfloat brightblendcolour[4] = {0, 0, 0, 0.5f * brightfactor};
	const GLfloat constantwhite[4] = {1, 1, 1, 1};

	// don't trust == with floats, don't bother if it's 1 cos it does nothing to the end result!!!
	if (brightfactor > 0.99 && brightfactor < 1.01) {
		return;
	}

	glColor4fv(constantwhite);
	glEnable(GL_BLEND);
	glDisable(GL_ALPHA_TEST);
	glBlendFunc(GL_DST_COLOR, GL_SRC_COLOR);

	// combine hack...
	// this is weird cos it uses a texture but actually doesn't - the parameters of the
	// combiner function only use the incoming fragment colour and a constant colour...
	// you could actually bind any texture you care to mention and get the very same result...
	// i've decided not to bind any at all...
	glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
	glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
	glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_CONSTANT);
	glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_ALPHA);
	glTexEnvf(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PRIMARY_COLOR);
	glTexEnvf(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
	glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, brightblendcolour);

	glBegin(GL_QUADS);
	glTexCoord2f(0, 0); glVertex2f(0, 0);
	glTexCoord2f(0, 1); glVertex2f(0, GL_State.OrthoHeight);
	glTexCoord2f(1, 1); glVertex2f(GL_State.OrthoWidth, GL_State.OrthoHeight);
	glTexCoord2f(1, 0); glVertex2f(GL_State.OrthoWidth, 0);
	glEnd();

	// restore combiner function colour to white so as not to mess up texture state
	glTexEnvfv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_COLOR, constantwhite);
	glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
	glDisable(GL_BLEND);
	glEnable(GL_ALPHA_TEST);
	glColor4fv(constantwhite);
}
Ah well, a little bigger, but it is still only one function to set gamma: call it with the gamma value in place of brightfactor. It will of course need to be modified for dhewm3, but that should not be too bad; at the moment we just need to get the values for GL_State.OrthoWidth and GL_State.OrthoHeight.
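As a rough illustration of that last point (a sketch only, not the actual dhewm3 patch), the quad emission could take the ortho extents as parameters, so whichever values dhewm3's 2D pass actually uses (the virtual 640x480 screen coordinates or glConfig.vidWidth/vidHeight) can simply be passed in; the helper name below is made up:

// hypothetical helper: hand in whatever the current 2D ortho extents are,
// then the glBegin/glEnd block above no longer depends on GL_State.Ortho*
static void SCR_DrawBrightnessQuad( float orthoWidth, float orthoHeight ) {
	glBegin( GL_QUADS );
	glTexCoord2f( 0, 0 ); glVertex2f( 0, 0 );
	glTexCoord2f( 0, 1 ); glVertex2f( 0, orthoHeight );
	glTexCoord2f( 1, 1 ); glVertex2f( orthoWidth, orthoHeight );
	glTexCoord2f( 1, 0 ); glVertex2f( orthoWidth, 0 );
	glEnd();
}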
-
Yeah, drivers seem to have matured a bit, but as you note there are still some problems. Well, considering it's their second model I'm not too surprised; it does take time to reach a certain level of competence in this market. Yeah, it happened once before where they dropped the GPU market and instead focused on the iGPU side, so it might happen again; only time and sales numbers will tell.
-
It's come a long way already, but I cannot say it will work for all titles yet. Kinda strange that the PCIe bus is limited to x8 for the Battlemage. Tests show that in 4K it takes a quite nice leap over the 4060 Ti FE model, which gets even better at 1440 resolutions; in 1080p the 4060 Ti FE takes the lead somewhat, so the card is not as good at low resolutions. So 1440p seems to be the sweet spot for it.
-
the code pieces that needs modifying are these -> /* ================== RB_GLSL_DrawInteraction ================== */ static void RB_GLSL_DrawInteraction( const drawInteraction_t *din ) { /* Half Lambertian constants */ static const float whalf[] = { 0.0f, 0.0f, 0.0f, 0.5f }; static const float wzero[] = { 0.0f, 0.0f, 0.0f, 0.0f }; static const float wone[] = { 0.0f, 0.0f, 0.0f, 1.0f }; // load all the vertex program parameters qglUniform4fv( u_light_origin, 1, din->localLightOrigin.ToFloatPtr() ); qglUniform4fv( u_view_origin, 1, din->localViewOrigin.ToFloatPtr() ); qglUniformMatrix2x4fv( u_diffMatrix, 1, GL_FALSE, RB_GLSL_MakeMatrix( DIFFMATRIX( 0 ), DIFFMATRIX( 1 ) ) ); qglUniformMatrix2x4fv( u_bumpMatrix, 1, GL_FALSE, RB_GLSL_MakeMatrix( BUMPMATRIX( 0 ), BUMPMATRIX( 1 ) ) ); qglUniformMatrix2x4fv( u_specMatrix, 1, GL_FALSE, RB_GLSL_MakeMatrix( SPECMATRIX( 0 ), SPECMATRIX( 1 ) ) ); qglUniformMatrix4fv( u_projMatrix, 1, GL_FALSE, RB_GLSL_MakeMatrix( PROJMATRIX( 0 ), PROJMATRIX( 1 ), wzero, PROJMATRIX( 2 ) ) ); qglUniformMatrix4fv( u_fallMatrix, 1, GL_FALSE, RB_GLSL_MakeMatrix( PROJMATRIX( 3 ), whalf, wzero, wone ) ); /* Lambertian constants */ static const float zero[4] = { 0.0f, 0.0f, 0.0f, 0.0f }; static const float one[4] = { 1.0f, 1.0f, 1.0f, 1.0f }; static const float negOne[4] = { -1.0f, -1.0f, -1.0f, -1.0f }; switch ( din->vertexColor ) { case SVC_IGNORE: qglUniform4fv( u_color_modulate, 1, zero ); qglUniform4fv( u_color_add, 1, one ); break; case SVC_MODULATE: qglUniform4fv( u_color_modulate, 1, one ); qglUniform4fv( u_color_add, 1, zero ); break; case SVC_INVERSE_MODULATE: qglUniform4fv( u_color_modulate, 1, negOne ); qglUniform4fv( u_color_add, 1, one ); break; } // set the constant colors qglUniform4fv( u_constant_diffuse, 1, din->diffuseColor.ToFloatPtr() ); qglUniform4fv( u_constant_specular, 1, din->specularColor.ToFloatPtr() ); // TODO: shader gamma for GLSL. 
// set the textures RB_ARB2_BindInteractionTextureSet( din ); // draw it RB_DrawElementsWithCounters( din->surf->geo ); } /* ================== RB_ARB2_DrawInteraction ================== */ static void RB_ARB2_DrawInteraction( const drawInteraction_t *din ) { // load all the vertex program parameters qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_LIGHT_ORIGIN, din->localLightOrigin.ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_VIEW_ORIGIN, din->localViewOrigin.ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_LIGHT_PROJECT_S, din->lightProjection[0].ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_LIGHT_PROJECT_T, din->lightProjection[1].ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_LIGHT_PROJECT_Q, din->lightProjection[2].ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_LIGHT_FALLOFF_S, din->lightProjection[3].ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_BUMP_MATRIX_S, din->bumpMatrix[0].ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_BUMP_MATRIX_T, din->bumpMatrix[1].ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_DIFFUSE_MATRIX_S, din->diffuseMatrix[0].ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_DIFFUSE_MATRIX_T, din->diffuseMatrix[1].ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_SPECULAR_MATRIX_S, din->specularMatrix[0].ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_SPECULAR_MATRIX_T, din->specularMatrix[1].ToFloatPtr() ); // testing fragment based normal mapping if ( r_testARBProgram.GetBool() ) { qglProgramEnvParameter4fvARB( GL_FRAGMENT_PROGRAM_ARB, 2, din->localLightOrigin.ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_FRAGMENT_PROGRAM_ARB, 3, din->localViewOrigin.ToFloatPtr() ); } static const float zero[4] = { 0.0f, 0.0f, 0.0f, 0.0f }; static const float one[4] = { 1.0f, 1.0f, 1.0f, 1.0f }; static const float negOne[4] = { -1.0f, -1.0f, -1.0f, -1.0f }; switch ( din->vertexColor ) { case SVC_IGNORE: qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_COLOR_MODULATE, zero ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_COLOR_ADD, one ); break; case SVC_MODULATE: qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_COLOR_MODULATE, one ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_COLOR_ADD, zero ); break; case SVC_INVERSE_MODULATE: qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_COLOR_MODULATE, negOne ); qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, PP_COLOR_ADD, one ); break; } // set the constant colors qglProgramEnvParameter4fvARB( GL_FRAGMENT_PROGRAM_ARB, 0, din->diffuseColor.ToFloatPtr() ); qglProgramEnvParameter4fvARB( GL_FRAGMENT_PROGRAM_ARB, 1, din->specularColor.ToFloatPtr() ); // DG: brightness and gamma in shader as program.env[4] if ( r_gammaInShader.GetBool() ) { // program.env[4].xyz are all r_brightness, program.env[4].w is 1.0f / r_gamma float parm[4]; parm[0] = parm[1] = parm[2] = r_brightness.GetFloat(); parm[3] = 1.0f / r_gamma.GetFloat(); // 1.0f / gamma so the shader doesn't have to do this calculation qglProgramEnvParameter4fvARB( GL_FRAGMENT_PROGRAM_ARB, PP_GAMMA_BRIGHTNESS, parm ); } // set the textures RB_ARB2_BindInteractionTextureSet( din ); // draw it RB_DrawElementsWithCounters( din->surf->geo ); } i left in the arb version so you can get an idea of what is going on, also this piece -> /* ================= R_LoadARBProgram 
================= */ void R_LoadARBProgram( int progIndex ) { int ofs; int err; char *buffer; char *start = NULL, *end; #if D3_INTEGRATE_SOFTPART_SHADERS if ( progs[progIndex].ident == VPROG_SOFT_PARTICLE || progs[progIndex].ident == FPROG_SOFT_PARTICLE ) { // these shaders are loaded directly from a string common->Printf( "<internal> %s", progs[progIndex].name ); const char *srcstr = ( progs[progIndex].ident == VPROG_SOFT_PARTICLE ) ? softpartVShader : softpartFShader; // copy to stack memory buffer = ( char * )_alloca( strlen( srcstr ) + 1 ); strcpy( buffer, srcstr ); } else #endif // D3_INTEGRATE_SOFTPART_SHADERS { idStr fullPath = "glprogs/"; fullPath += progs[progIndex].name; char *fileBuffer; common->Printf( "%s", fullPath.c_str() ); // load the program even if we don't support it, so // fs_copyfiles can generate cross-platform data dumps fileSystem->ReadFile( fullPath.c_str(), ( void ** )&fileBuffer, NULL ); if ( !fileBuffer ) { common->Printf( ": File not found\n" ); return; } // copy to stack memory and free buffer = ( char * )_alloca( strlen( fileBuffer ) + 1 ); strcpy( buffer, fileBuffer ); fileSystem->FreeFile( fileBuffer ); } if ( !glConfig.isInitialized ) { return; } // // submit the program string at start to GL // if ( progs[progIndex].ident == 0 ) { // allocate a new identifier for this program progs[progIndex].ident = PROG_USER + progIndex; } // vertex and fragment programs can both be present in a single file, so // scan for the proper header to be the start point, and stamp a 0 in after the end if ( progs[progIndex].target == GL_VERTEX_PROGRAM_ARB ) { if ( !glConfig.ARBVertexProgramAvailable ) { common->Printf( ": GL_VERTEX_PROGRAM_ARB not available\n" ); return; } start = strstr( buffer, "!!ARBvp" ); } if ( progs[progIndex].target == GL_FRAGMENT_PROGRAM_ARB ) { if ( !glConfig.ARBFragmentProgramAvailable ) { common->Printf( ": GL_FRAGMENT_PROGRAM_ARB not available\n" ); return; } start = strstr( buffer, "!!ARBfp" ); } if ( !start ) { common->Printf( ": !!ARB not found\n" ); return; } end = strstr( start, "END" ); if ( !end ) { common->Printf( ": END not found\n" ); return; } end[3] = 0; // DG: hack gamma correction into shader. 
TODO shader gamma for GLSL if ( r_gammaInShader.GetBool() && progs[progIndex].target == GL_FRAGMENT_PROGRAM_ARB && strstr( start, "nodhewm3gammahack" ) == NULL ) { // note that strlen("dhewm3tmpres") == strlen("result.color") const char *tmpres = "TEMP dhewm3tmpres; # injected by dhewm3 for gamma correction\n"; // Note: program.env[21].xyz = r_brightness; program.env[21].w = 1.0/r_gamma // outColor.rgb = pow(dhewm3tmpres.rgb*r_brightness, vec3(1.0/r_gamma)) // outColor.a = dhewm3tmpres.a; const char *extraLines = "# gamma correction in shader, injected by dhewm3\n" // MUL_SAT clamps the result to [0, 1] - it must not be negative because // POW might not work with a negative base (it looks wrong with intel's Linux driver) // and clamping values > 1 to 1 is ok because when writing to result.color // it's clamped anyway and pow(base, exp) is always >= 1 for base >= 1 "MUL_SAT dhewm3tmpres.xyz, program.env[21], dhewm3tmpres;\n" // first multiply with brightness "POW result.color.x, dhewm3tmpres.x, program.env[21].w;\n" // then do pow(dhewm3tmpres.xyz, vec3(1/gamma)) "POW result.color.y, dhewm3tmpres.y, program.env[21].w;\n" // (apparently POW only supports scalars, not whole vectors) "POW result.color.z, dhewm3tmpres.z, program.env[21].w;\n" "MOV result.color.w, dhewm3tmpres.w;\n" // alpha remains unmodified "\nEND\n\n"; // we add this block right at the end, replacing the original "END" string int fullLen = strlen( start ) + strlen( tmpres ) + strlen( extraLines ); char *outStr = ( char * )_alloca( fullLen + 1 ); // add tmpres right after OPTION line (if any) char *insertPos = R_FindArbShaderComment( start, "OPTION" ); if ( insertPos == NULL ) { // no OPTION? then just put it after the first line (usually something like "!!ARBfp1.0\n") insertPos = start; } // but we want the position *after* that line while ( *insertPos != '\0' && *insertPos != '\n' && *insertPos != '\r' ) { ++insertPos; } // skip the newline character(s) as well while ( *insertPos == '\n' || *insertPos == '\r' ) { ++insertPos; } // copy text up to insertPos int curLen = insertPos - start; memcpy( outStr, start, curLen ); // copy tmpres ("TEMP dhewm3tmpres; # ..") memcpy( outStr + curLen, tmpres, strlen( tmpres ) ); curLen += strlen( tmpres ); // copy remaining original shader up to (excluding) "END" int remLen = end - insertPos; memcpy( outStr + curLen, insertPos, remLen ); curLen += remLen; outStr[curLen] = '\0'; // make sure it's NULL-terminated so normal string functions work // replace all existing occurrences of "result.color" with "dhewm3tmpres" for ( char *resCol = strstr( outStr, "result.color" ); resCol != NULL; resCol = strstr( resCol + 13, "result.color" ) ) { memcpy( resCol, "dhewm3tmpres", 12 ); // both strings have the same length. 
// if this was part of "OUTPUT bla = result.color;", replace // "OUTPUT bla" with "ALIAS bla" (so it becomes "ALIAS bla = dhewm3tmpres;") char *s = resCol - 1; // first skip whitespace before "result.color" while ( s > outStr && ( *s == ' ' || *s == '\t' ) ) { --s; } // if there's no '=' before result.color, this line can't be affected if ( *s != '=' || s <= outStr + 8 ) { continue; // go on with next "result.color" in the for-loop } --s; // we were on '=', so go to the char before and it's time to skip whitespace again while ( s > outStr && ( *s == ' ' || *s == '\t' ) ) { --s; } // now we should be at the end of "bla" (or however the variable/alias is called) if ( s <= outStr + 7 || !R_IsArbIdentifier( *s ) ) { continue; } --s; // skip all the remaining chars that are legal in identifiers while ( s > outStr && R_IsArbIdentifier( *s ) ) { --s; } // there should be at least one space/tab between "OUTPUT" and "bla" if ( s <= outStr + 6 || ( *s != ' ' && *s != '\t' ) ) { continue; } --s; // skip remaining whitespace (if any) while ( s > outStr && ( *s == ' ' || *s == '\t' ) ) { --s; } // now we should be at "OUTPUT" (specifically at its last 'T'), // if this is indeed such a case if ( s <= outStr + 5 || *s != 'T' ) { continue; } s -= 5; // skip to start of "OUTPUT", if this is indeed "OUTPUT" if ( idStr::Cmpn( s, "OUTPUT", 6 ) == 0 ) { // it really is "OUTPUT" => replace "OUTPUT" with "ALIAS " memcpy( s, "ALIAS ", 6 ); } } assert( curLen + strlen( extraLines ) <= fullLen ); // now add extraLines that calculate and set a gamma-corrected result.color // strcat() should be safe because fullLen was calculated taking all parts into account strcat( outStr, extraLines ); start = outStr; } qglBindProgramARB( progs[progIndex].target, progs[progIndex].ident ); qglGetError(); qglProgramStringARB( progs[progIndex].target, GL_PROGRAM_FORMAT_ASCII_ARB, strlen( start ), start ); err = qglGetError(); qglGetIntegerv( GL_PROGRAM_ERROR_POSITION_ARB, ( GLint * )&ofs ); if ( err == GL_INVALID_OPERATION ) { const GLubyte *str = qglGetString( GL_PROGRAM_ERROR_STRING_ARB ); common->Printf( "\nGL_PROGRAM_ERROR_STRING_ARB: %s\n", str ); if ( ofs < 0 ) { common->Printf( "GL_PROGRAM_ERROR_POSITION_ARB < 0 with error\n" ); } else if ( ofs >= ( int )strlen( start ) ) { common->Printf( "error at end of program\n" ); } else { int printOfs = Max( ofs - 20, 0 ); // DG: print some more context common->Printf( "error at %i:\n%s", ofs, start + printOfs ); } return; } if ( ofs != -1 ) { common->Printf( "\nGL_PROGRAM_ERROR_POSITION_ARB != -1 without error\n" ); return; } common->Printf( "\n" ); } in draw_arb2.cpp and /* ================== RB_SetProgramEnvironment Sets variables that can be used by all vertex programs [SteveL #3877] Note on the use of fragment program environmental variables. Parameters 0 and 1 are set here to allow conversion of screen coordinates to texture coordinates, for use when sampling _currentRender. Those same parameters 0 and 1, plus 2 and 3, are given entirely different meanings in draw_arb2.cpp while light interactions are being drawn. This function is called again before currentRender size is needed by post processing effects are done, so there's no clash. Only parameters 0..3 were in use before #3877 - and in dhewm3 also 4, for gamma in shader. Now I've used a new parameter 22 for the size of _currentDepth. It's needed throughout, including by light interactions, and its size might in theory differ from _currentRender. Parameters 23 and 24 are used by soft particles #3878. 
Note these can be freely reused by different draw calls. ================== */ void RB_SetProgramEnvironment( bool isPostProcess ) { float parm[4]; int pot; if ( !glConfig.ARBVertexProgramAvailable ) { return; } #if 0 // screen power of two correction factor, one pixel in so we don't get a bilerp // of an uncopied pixel int w = backEnd.viewDef->viewport.x2 - backEnd.viewDef->viewport.x1 + 1; pot = globalImages->currentRenderImage->uploadWidth; if ( w == pot ) { parm[0] = 1.0; } else { parm[0] = ( float )( w - 1 ) / pot; } int h = backEnd.viewDef->viewport.y2 - backEnd.viewDef->viewport.y1 + 1; pot = globalImages->currentRenderImage->uploadHeight; if ( h == pot ) { parm[1] = 1.0; } else { parm[1] = ( float )( h - 1 ) / pot; } parm[2] = 0; parm[3] = 1; qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, 0, parm ); #else // screen power of two correction factor, assuming the copy to _currentRender // also copied an extra row and column for the bilerp int w = backEnd.viewDef->viewport.x2 - backEnd.viewDef->viewport.x1 + 1; pot = globalImages->currentRenderImage->uploadWidth; parm[0] = ( float )w / pot; int h = backEnd.viewDef->viewport.y2 - backEnd.viewDef->viewport.y1 + 1; pot = globalImages->currentRenderImage->uploadHeight; parm[1] = ( float )h / pot; parm[2] = 0; parm[3] = 1; qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, 0, parm ); #endif qglProgramEnvParameter4fvARB( GL_FRAGMENT_PROGRAM_ARB, 0, parm ); // window coord to 0.0 to 1.0 conversion parm[0] = 1.0 / w; parm[1] = 1.0 / h; parm[2] = 0; parm[3] = 1; qglProgramEnvParameter4fvARB( GL_FRAGMENT_PROGRAM_ARB, 1, parm ); // DG: brightness and gamma in shader as program.env[4]. TODO: shader gamma for GLSL if ( r_gammaInShader.GetBool() ) { // program.env[4].xyz are all r_brightness, program.env[4].w is 1.0/r_gamma if ( !isPostProcess ) { parm[0] = parm[1] = parm[2] = r_brightness.GetFloat(); parm[3] = 1.0f / r_gamma.GetFloat(); // 1.0f / gamma so the shader doesn't have to do this calculation } else { // don't apply gamma/brightness in postprocess passes to avoid applying them twice // (setting them to 1.0 makes them no-ops) parm[0] = parm[1] = parm[2] = parm[3] = 1.0f; } qglProgramEnvParameter4fvARB( GL_FRAGMENT_PROGRAM_ARB, PP_GAMMA_BRIGHTNESS, parm ); } // #3877: Allow shaders to access depth buffer. // Two useful ratios are packed into this parm: [0] and [1] hold the x,y multipliers you need to map a screen // coordinate (fragment position) to the depth image: those are simply the reciprocal of the depth // image size, which has been rounded up to a power of two. Slots [3] and [4] hold the ratio of the depth image // size to the current render image size. These sizes can differ if the game crops the render viewport temporarily // during post-processing effects. The depth render is smaller during the effect too, but the depth image doesn't // need to be downsized, whereas the current render image does get downsized when it's captured by the game after // the skybox render pass. The ratio is needed to map between the two render images. 
parm[0] = 1.0f / globalImages->currentDepthImage->uploadWidth; parm[1] = 1.0f / globalImages->currentDepthImage->uploadHeight; parm[2] = static_cast<float>( globalImages->currentRenderImage->uploadWidth ) / globalImages->currentDepthImage->uploadWidth; parm[3] = static_cast<float>( globalImages->currentRenderImage->uploadHeight ) / globalImages->currentDepthImage->uploadHeight; qglProgramEnvParameter4fvARB( GL_FRAGMENT_PROGRAM_ARB, PP_CURDEPTH_RECIPR, parm ); // // set eye position in global space // parm[0] = backEnd.viewDef->renderView.vieworg[0]; parm[1] = backEnd.viewDef->renderView.vieworg[1]; parm[2] = backEnd.viewDef->renderView.vieworg[2]; parm[3] = 1.0; qglProgramEnvParameter4fvARB( GL_VERTEX_PROGRAM_ARB, 1, parm ); } in draw_common.cpp. note the TODO sections. biggest problem right now is probably in R_LoadARBProgram which is not used by the GLSL backend at all since it only handles the interaction shaders atm. it does have its own version of this function but it is lobotomized and only used if for some reason a user wants to use an external shader.it is here in draw_arb2.cpp -> /* ================== GL_GetGLSLFromFile ================== */ static GLchar *GL_GetGLSLFromFile( GLchar *name ) { idStr fullPath = "glprogs130/"; fullPath += name; GLchar *fileBuffer; GLchar *buffer; if ( !glConfig.isInitialized ) { return NULL; } fileSystem->ReadFile( fullPath.c_str(), reinterpret_cast<void **>( &fileBuffer ), NULL ); if ( !fileBuffer ) { common->Printf( "%s: File not found, using internal shaders\n", fullPath.c_str() ); return NULL; } // copy to stack memory (do not use _alloca here!!! it breaks loading the shaders) buffer = reinterpret_cast<GLchar *>( Mem_Alloc( strlen( fileBuffer ) + 1 ) ); strcpy( buffer, fileBuffer ); fileSystem->FreeFile( fileBuffer ); common->Printf( "%s: loaded\n", fullPath.c_str() ); return buffer; } as you can see it is somewhat more empty than the arb shaders version, but would probably suffice for the same thing.
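For reference on the "TODO: shader gamma for GLSL" notes above, the GLSL path would only need to apply the same math that the injected ARB block performs. Here is a minimal sketch of that per-channel formula in plain C++ (just to spell the math out; this is not shader code and not part of the dhewm3 patch):

#include <cmath>
#include <algorithm>

// What the injected ARB block computes for each colour channel:
// out = pow(clamp(in * brightness, 0, 1), 1 / gamma), with alpha left untouched.
static float GammaCorrectChannel( float in, float brightness, float invGamma ) {
	float c = std::min( std::max( in * brightness, 0.0f ), 1.0f );  // the MUL_SAT step
	return std::pow( c, invGamma );                                 // the POW step, with 1/gamma precomputed
}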
-
Btw, IMO it would be useful to be able to use .dds images in the regular myFM/textures folder structure, so you don't have to replicate that structure in a dedicated dds folder. TDM doesn't use the texture quality settings anyway; that's a deprecated Doom 3 thing.
-
Afaik .flt was an old OpenFlight model format that John Carmack was using to test MegaTextured terrain, using idTech4 as a test bed; I don't think anyone ever used it besides him. I've never tested it, but it's probably broken and doesn't work, just like MD3 also didn't work initially.
-
Hi folks, and thanks so much to the devs & mappers for such a great game. After playing a bunch over Christmas week after many years gap, I got curious about how it all went together, and decided to learn by picking a challenge - specifically, when I looked at scripting, I wondered how hard it would be to add library calls, for functionality that would never be in core, in a not-completely-hacky-way. Attached is an example of a few rough scripts - one which runs a pluggable webserver, one which logs anything you pick up to a webpage, one which does text-to-speech and has a Phi2 LLM chatbot ("Borland, the angry archery instructor"). The last is gimmicky, and takes 20-90s to generate responses on my i7 CPU while TDM runs, but if you really wanted something like this, you could host it and just do API calls from the process. The Piper text-to-speech is much more potentially useful IMO. Thanks to snatcher whose Forward Lantern and Smart Objects mods helped me pull example scripts together. I had a few other ideas in mind, like custom AI path-finding algorithms that could not be fitted into scripts, math/data algorithms, statistical models, or video generation/processing, etc. but really interested if anyone has ideas for use-cases. TL;DR: the upshot was a proof-of-concept, where PK4s can load new DLLs at runtime, scripts can call them within and across PK4 using "header files", and TDM scripting was patched with some syntax to support discovery and making matching calls, with proper script-compile-time checking. Why? Mostly curiosity, but also because I wanted to see what would happen if scripts could use text-to-speech and dynamically-defined sound shaders. I also could see that simply hard-coding it into a fork would not be very constructive or enlightening, so tried to pick a paradigm that fits (mostly) with what is there. In short, I added a Library idClass (that definitely needs work) that will instantiate a child Library for each PK4-defined external lib, each holding an eventCallbacks function table of callbacks defined in the .so file. This almost follows the idClass::ProcessEventArgsPtr flow normally. As such, the so/DLL extensions mostly behave as sys event calls in scripting. Critically, while I have tried to limit function reference jumps and var copies to almost the same count as the comparable sys event calls, this is not intended for performance critical code - more things like text-to-speech that use third-party libraries and are slow enough to need their own (OS) thread. Why Rust? While I have coded for many years, I am not a gamedev or modder, so I am learning as I go on the subject in general - my assumption was that this is not already a supported approach due to stability and security. It seems clear that you could mod TDM in C++ by loading a DLL alongside and reaching into the vtable, and pulling strings, or do something like https://github.com/dhewm/dhewm3-sdk/ . However, while you can certainly kill a game with a script, it seems harder to compile something that will do bad things with pointers or accidentally shove a gigabyte of data into a string, corrupt disks, run bitcoin miners, etc. and if you want to do this in a modular way to load a bunch of such mods then that doesn't seem so great. 
So, I thought "what provides a lot of flexibility, but some protection against subtle memory bugs", and decided that a very basic Rust SDK would make it easy to define a library extension as something like: #[therustymod_lib(daemon=true)] mod mod_web_browser { use crate::http::launch; async fn __run() { print!("Launching rocket...\n"); launch().await } fn init_mod_web_browser() -> bool { log::add_to_log("init".to_string(), MODULE_NAME.to_string()).is_ok() } fn register_module(name: *const c_char, author: *const c_char, tags: *const c_char, link: *const c_char, description: *const c_char) -> c_int { ... and then Rust macros can handle mapping return types to ReturnFloat(...) calls, etc. at compile-time rather than having to add layers of function call indirection. Ironically, I did not take it as far as building in the unsafe wrapping/unwrapping of C/C++ types via the macro, so the addon-writer person then has to do write unsafe calls to take *const c_char to string and v.v.. However, once that's done, the events can then call out to methods on a singleton and do actual work in safe Rust. While these functions correspond to dynamically-generated TDM events, I do not let the idClass get explicitly leaked to Rust to avoid overexposing the C++ side, so they are class methods in the vtable only to fool the compiler and not break Callback.cpp. For the examples in Rust, I was moving fast to do a PoC, so they are not idiomatic Rust and there is little error handling, but like a script, when it fails, it fails explicitly, rather than (normally) in subtle user-defined C++ buffer overflow ways. Having an always-running async executor (tokio) lets actual computation get shipped off fast to a real system thread, and the TDM event calls return immediately, with the caller able to poll for results by calling a second Rust TDM event from an idThread. As an example of a (synchronous) Rust call in a script: extern mod_web_browser { void init_mod_web_browser(); boolean do_log_to_web_browser(int module_num, string log_line); int register_module(string name, string author, string tags, string link, string description); void register_page(int module_num, bytes page); void update_status(int module_num, string status_data); } void mod_grab_log_init() { boolean grabbed_check = false; entity grabbed_entity = $null_entity; float web_module_id = mod_web_browser::register_module( "mod_grab_log", "philtweir based on snatcher's work", "Event,Grab", "https://github.com/philtweir/therustymod/", "Logs to web every time the player grabs something." ); On the verifiability point, both as there are transpiled TDM headers and to mandate source code checkability, the SDK is AGPL. What state is it in? The code goes from early-stage but kinda (hopefully) logical - e.g. what's in my TDM fork - through to basic, what's in the SDK - through to rough - what's in the first couple examples - through to hacky - what's in the fun stretch-goal example, with an AI chatbot talking on a dynamically-loaded sound shader. (see below) The important bit is the first, the TDM approach, but I did not see much point in refining it too far without feedback or a proper demonstration of what this could enable. Note that the TDM approach does not assume Rust, I wanted that as a baseline neutral thing - it passes out a short set of allowed callbacks according to a .h file, so language than can produce dynamically-linkable objects should be able to hook in. What functionality would be essential but is missing? 
- support for anything other than Linux x86 (but I use TDM's dlsym wrappers, so this should not be a huge issue if the type sizes, etc. match up)
- ability to conditionally call an external library function (the dependencies can be loaded out of order and used from any script, but right now every referenced callback needs to be in place with matching signatures by the time the main load sequence finishes, or it will complain)
- packaging a .so+DLL into the PK4, with verification of source and checksum
- tidying up the Rust SDK to be less brittle and (optionally) transparently manage pre-Rustified input/output types
- some way of semantic-versioning the headers and (easily) maintaining backwards compatibility in the external libraries
- right now, a dedicated .script file has to be written to define the interface for each .so/DLL - this could be dynamic via an autogenerated SDK callback to avoid mistakes
- maintaining any non-disposable state in the library seems like an inherently bad idea, but perhaps Rust-side Save/Restore hooks
- any way to pass entities from a script, although I'm skeptical that this is desirable at all
One of the most obvious architectural issues is that I added a bytes type (for uncopied char* pointers) in the scripting to be useful - not for the script to interact with directly, but so that, for instance, a lib can pass back a Decl definition that can be held in a variable until the script calls a subsequent (sys) event call to parse it straight from memory. That breaks a bunch of assumptions about event arguments, I think, and likely save/restore. Keen for suggestions - making indexed entries in a global event arg pointer lookup table, say, that the script can safely pass around? Adding CreateNewDeclFromMemory to the exposed ABI instead? While I know that there is no network play at the moment, I also saw that somebody had experimented with it and did not want to make that harder, so I'm also conscious that this would need thought. One maybe-interesting idea for a two-player stealth mode could be a player-capturable companion to take across the map, like capture-the-AI-flag, and pluggable libs might help with adding statistical models for logic and behaviour more easily than scripts, so I can see ways dynamic libraries and multiplayer would be complementary if the technical friction could be resolved. Why am I telling anybody? I know this would not remotely be mergeable, and everyone has bigger priorities, but I did wonder if the general direction was sensible. Then I thought, "hey, maybe I can get feedback from the core team on whether this concept is even desirable and, if so, see how long that journey would be". And here I am. [EDITED: for some reason I said "speech-to-text" instead of "text-to-speech" everywhere the first time, although tbh I thought both would be interesting]
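To make the "callbacks according to a .h file" idea a bit more concrete, here is a hypothetical C++ sketch of what such a C-ABI boundary could look like. None of these names come from the actual fork or SDK; they are placeholders for the general shape: a table of script-visible events that the engine resolves through its dlsym wrapper at load time.

extern "C" {
	// one entry per script-visible event exported by the external library
	struct LibEvent {
		const char *name;       // e.g. "do_log_to_web_browser" (name taken from the example above)
		const char *argFormat;  // engine-defined argument signature, e.g. "ds" for (int, string)
		void (*func)(const void **args, void *returnSlot);
	};

	// the single symbol the engine would look up with its dlsym wrapper
	const LibEvent *TDM_GetEventTable(int *count);
}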
- 25 replies
-
[2.13] Interaction groups in materials: new behavior
stgatilov replied to stgatilov's topic in TDM Editors Guild
Moved this topic from the development forums, since it covers a potentially important behavior change in 2.13. Luckily, its importance is countered by the rarity of such complicated materials. I hope that this change has not broken existing materials. And even if it has broken any, we will be able to fix them manually... -
Fan Mission: A Winter's Tale by Bikerdude (2024-11-30)
Dead Rat replied to nbohr1more's topic in Fan Missions
Thanks Bikerdude, a very cosy little mission. I'd like to see more charitable outings like this where the player steals from the rich to distribute amongst the poor. -
Detection of the player, bodies, ropes, etc. by AI depends a lot on how well-lit the object is. The player case is the most important one. It uses its own system to compute the lightgem value, which renders additional views on the GPU. Even with many improvements over the years, this approach costs a lot in performance, so it cannot be used for all objects which AIs can see. That's why there is another system for computing light values for the other objects. Since the engine was closed source when TDM was originally created, this system is very approximate. It takes into account light intensity and size, but mostly ignores the other factors like shadows, light projection/falloff textures, light texture transforms, blinking lights, lights with some material stages disabled, etc. A new light estimate system has been implemented, and it is going to be the default in 2.13. In fact, it has already been the default since dev17104-10844. As far as I know, it accurately takes into account all the properties that our lights have. It has its own issues, of course. It relies on a few samples on the object surface, and it is rather performance-intensive because it casts shadow rays all over the world. I had to distribute its work over several frames to reduce the performance cost, so now there is some delay before the proper value is computed (which can also cause problems sometimes). Here is how it works on a test map: What does this mean for us? On the good side, you should no longer have the problem where you put a corpse into a dark spot under a table but guards still detect it easily as if the table did not exist. On the bad side, the new light values are inherently different, so guards might detect objects in situations where they did not detect them before, or vice versa. Also the delay can probably cause some issues, e.g. guards acting upon an obsolete value which is updated just afterwards. The effort is tracked in 6546. There are several cvars for this:
g_lightQuotientAlgo: value = 0 uses the old system, value = 1 uses the new system.
g_showLightQuotient: enables debug visualization like in the video above.
g_les*: lots of internal parameters to tweak the system (you should not mess with them).
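As a rough mental model of that sampling approach (a deliberately simplified sketch with made-up types and stubbed helper functions, not TDM's actual implementation), the per-object estimate boils down to something like this:

#include <vector>

struct Vec3 { float x, y, z; };
struct Light { /* origin, radius, projection/falloff data, ... */ };

// Stand-ins for engine internals, stubbed so the sketch compiles:
static bool ShadowRayBlocked( const Vec3 &point, const Light &light ) { return false; }   // real code traces against world geometry
static float LightContribution( const Vec3 &point, const Light &light ) { return 0.0f; }  // real code applies projection/falloff textures, parms, etc.

// Average the light reaching a handful of sample points on the object's surface.
// In TDM this work is spread over several frames, which is where the delay comes from.
static float EstimateLightValue( const std::vector<Vec3> &samples, const std::vector<Light> &lights ) {
	float total = 0.0f;
	for ( const Vec3 &p : samples ) {
		for ( const Light &l : lights ) {
			if ( ShadowRayBlocked( p, l ) )
				continue;                        // the expensive shadow-ray part
			total += LightContribution( p, l );
		}
	}
	return samples.empty() ? 0.0f : total / samples.size();
}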
-
And for some pointers, read the following topic: https://forums.thedarkmod.com/index.php?/topic/22533-tdm-for-diii4a-support-topic/
-
If OBJ works correctly in TDM, it should be the recommended format, at least over ASE. Even though both are text-based, OBJ is much leaner, and it compresses faster and better. This is a high-poly cube (24k polygons) saved and compressed in both formats: I haven't tested OBJ with 2.11 yet, but ASE has other downsides, like having to be exported from a clean scene, and the scene material library can only contain materials that are used for that particular model. OBJ should just use whatever has been assigned to it, and it should use the model pivot point instead of the world/scene origin point (0,0,0). If that's true, then exporting single models should be much easier now. You won't need a separate clean scene to do your exports.
-
I remember one case where smoothing groups were a pain to transfer to TDM correctly: And I looked into the TDM code which preprocesses LWO models. Of course it usually works fine, but it is not a reliable solution. If you encounter a model which is broken by this import code, you will have to work around it on your side, either by tweaking the model or by tweaking the exporter. Also, I believe I saw a large LWO model (terrain) which took a lot of time to load because of this preprocessing. The OBJ format is very good because it supports multi-patch texture atlases and smoothing groups topologically. There is zero preprocessing on the TDM side: you'll get exactly what is written in the OBJ file. And loading large models is much faster despite OBJ being a text format. Anyway, if someone wants to maintain a wiki article about the formats, then I think it would be very helpful to list all the working export tools: names, versions, instructions and forum threads, known caveats, etc.
-
Just need to say this: this was not true at all for me, when I modeled for Doom 3 at least. All my lwo models worked fine, including smoothing, though I use Modo 601, which was the exact modeling tool that Seneca, a model artist from id Software, used, and I also used his custom Doom 3 exporter plugin for Modo that cleans up and prepares .lwo files for Doom 3 prior to exporting. This is what I do in Modo 601 to export clean lwo files; perhaps it helps those using Blender as well.
1 - Put the model origin at world 0,0 and move the model geometry if necessary so its origin is at the place you want, like the model base.
2 - (Optional) Clean the geometry using some automatic system, if available, so your model doesn't export with bad data (like single vertices floating disconnected).
3 - Bake all model transforms, so rotation, translation and scale are all put at clean defaults (this will respect any prior scaling and rotation you may have done).
4 - (In Modo smoothing works this way.) Go to the material section and, if you want your model to be very smooth (like a pipe, for example), set material smoothing to 180%; you can set smoothing per material. If you want more smoothing control, you can literally cut and paste the faces/quads and you will create smoothing "groups" or islands that way.
5 - Use or select a single model layer and export that (the engine only cares about a single lwo layer); if you want a shadow mesh and collision mesh as well, put them all on the same layer as the main model.
6 - Never forget to triangulate the model before export (Seneca's plugin did that for me automatically).
I did this and all my models exported fine and worked well ingame. Though .lwo was invented by Lightwave and Modo was developed by former Lightwave engineers/programmers, so perhaps their lwo exporter is/was well optimized compared to other 3D tools.
-
Only other upgrade for me lately was a 52" 4K TV for the gaming machine. Got it for cheap from a guy who was moving in with his fiance. It supports HD as well and it is a 144 Hz TV, so quite good, though the size is a bit of a problem as it actually takes up the entire length of my adjustable gaming desk, meaning I had to place the PC on the floor, which I'm not happy about as it draws in dirt like you wouldn't believe (my poor vacuum cleaner is getting a workout!!!). I might upgrade the mainboard + CPU and RAM to a Ryzen 5800X3D and DDR4, as I've seen how the 3070 does in my buddy's PC, which is a Ryzen 3900X on an ASRock Taichi mainboard, and his PC actually gives the 3070 enough of a boost that I will consider it (well, I kinda did build it for him, though he paid for the components). He's also using a 3070 now after we tried with mine. Though my gamer will have the 2080 Ti instead of the 3070, I kinda suspect it won't do any worse than the 3070 performance-wise, especially not in 4K, which it already handles pretty well with my old CPU/mainboard. The 2080 Ti is PCIe 3.0 though, whereas the 3070 is PCIe 4.0, so I might try taking it to him to test in his PC before I make a decision.