The Dark Mod Forums

Everything posted by greebo

  1. Agreed, sounds reasonable.
  2. Here's the pre-release build 3.0.0pre8, we're getting closer to the finish line. It's about time for a new major release - not just because of the changes in this build, but because of all the improvements made over the last few releases. I'm going to need your help stabilising this release; with all the renderer changes, things are bound to require a few tweaks.

Most of the time since 2.14 has been spent on the renderer, which should now be faster in regular (non-lit) render mode. Lit render mode is probably not going to be faster, but it is at least more accurate: you can activate Shadow Mapping, and the interaction shader code has been ported over from TDM to produce the same look as the game.

Starting with this release, user settings are saved separately for each DR version, meaning that this release won't mess with the settings of previous versions, including keyboard shortcuts, colours, last used maps, etc. DR will try to import and use any settings of previous releases, but it won't change them.

For more things that have been changed or fixed, see the list below.

Download Windows Portable x64: https://drive.google.com/file/d/121ibqqYMKqRqjcQ1zS0HtM4JO0ADrgRo/view?usp=sharing
Download Windows Installer x64: https://drive.google.com/file/d/1209hG92chVzqOc-iaFDzJ0cBfQucDad8/view?usp=sharing

Linux folks need to compile this stuff from source; instructions for various distributions are on the wiki. If you happen to run into a crash, please record a crashdump: How to record a crashdump

Changes since 2.14.0 can be seen on the Bugtracker changelog, here's the summary:

#5787: Feature: Add "Create Particle" to right-click orthoview drop-down menu
#219: Feature: Shadow mapping support
#5761: Feature: Cut functionality to complement copy and paste
#5757: Feature: Ability to center 3D camera on selected entity
#5927: Feature: Save user settings by application version
#5848: Feature: MD5 Animation Viewer: show current frame & total frames
#5849: Feature: MD5 Animation Viewer: jump to frame
#5905: Improvement: Safeguard warning against Loss of Layering
#5872: Improvement: Option to filter skins out of search results in the Choose Model dialogue
#5909: Improvement: Revisit Interaction Shader to get closer to the TDM looks
#5822: Improvement: UI tweaks for worldspawn-to-entity conversion
#5873: Improvement: Entity inspector should recognise spawnargs beginning with "sprS_" as def spawnargs
#5825: Improvement: Allow absolute paths for snapshots
#5910: Improvement: Entity Inspector: classname field should always be read-only, to force use of the "Choose entity class" button
#5925: Fixed: Objective GUI doesn't display properly in some places
#5919: Fixed: Crash on loading certain maps
#5829: Fixed: Entity inspector shows inherited spawnargs of previous selection
#5853: Fixed: DR overwrite order for defs is different from TDM's
#5897: Fixed: X/Y and Camera View bindings don't save properly
#5858: Fixed: "Replace Selection with exported Model" sets classname to "func_static"
#5864: Fixed: Map -> Edit Package Info (darkmod.txt)... crashes DarkRadiant
#5846: Fixed: Rotating a func_static results in randomly stretched textures
#5840: Fixed: DR crashes when syncing with remote Git repository
#5847: Fixed: Switching visibility of Github repo from public to private causes crash
#5841: Fixed: Dockable window layout doesn't save new floating XY views
#5844: Fixed: "Choose skin..." button on custom model spawnargs shows skins for main model spawnarg
#5826: Fixed: Entity inspector considers inherited colors black
#5885: Fixed: ReloadDefs moves def_attached light crystals to entity origin
#5901: Fixed: .lin files can't be opened if different case than .map name
#5884: Fixed: Model chooser radio box selection issue
#5836: Fixed: Changing multiple lights between omni/projected resets colours to black

Changes since 3.0.0pre1
#5934: Selection overlay is z-fighting on patches
#5932: ForceShadows materials are not casting shadows
#5933: Moving brushes doesn't update the scene in lit render mode

Changes since 3.0.0pre2
#5941: Selected Skin not showing in ModelSelector
#5935: Defs take longer every time
#5939: Texture tool Free rotation not showing anymore
#5940: Light diamond frequently disappears on colour change until it's moved again
#5938: Additive blend stages over black diffusemap are z-fighting
#5936: Ambient lights don't render properly in lighting preview mode

Changes since 3.0.0pre3
#5949: Fixed: DR crash with combination of mouse buttons pressed
#5948: Fixed: Manipulation Vertex Dots are hard to see
#5947: Fixed: Git Sync Exception: too many redirects or authentication replays
#5907: Feature: Allow way to hide some entities in Create Entity list
#5946: Improvement: Speaker radii should be transparent
#5945: Improvement: Light diamonds should be transparent again
#5937: Fixed: Sound radius spheres don't always update
#5943: Fixed: Brush manipulation is laggy in huge maps
#5942: Fixed: Missing brushes when opening alphalabs1 from vanilla Doom 3 PK4s

Changes since 3.0.0pre4
Reduced Frame Buffer Count from 3 to 1, this should reduce RAM consumption a lot
#5953: Wireframe object drawing order is changing between sessions
#5951: "Hide Deselected" is slow when there are a lot of patches present in the scene
#5950: Visibility checks are slowing down front-end render pass

Changes since 3.0.0pre5
#5955 + #5956: Fixed: Player start entity is invisible in 3.0.0pre5
#5959: Geometry corruption / weird diagonal lines messing up the view

Changes since 3.0.0pre6
#5966: Light entity radius colour changes as you pan the camera around
#5964: Cannot manipulate func_emitter after creation
#5965: Resizing light entities via light_radius in property inspector broken
#5963: More geometry corruption in Camera View (Lighting Mode)
#5960: Crash in MD5 model viewer

Changes since 3.0.0pre7
#5968: Origin of player start entity misaligned
#5969: Cannot snap selected patch vertices to grid

Thanks for testing, as always!
  3. I assume Shift-MMB (PasteTextureProjected) might give you better results, if a brush happens to be nearby. It will do the same projection as Cap texture, but using the given plane of the brush face. The Cap Cycle Texture command uses the fixed X, Y and Z planes for the projection, without needing to read a plane from a brush face.
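To illustrate what projecting from a fixed axis plane means, here's a minimal sketch of such an axial projection - the function name and scale parameters are hypothetical, not DarkRadiant's actual code:

```cpp
// Hypothetical sketch of an axial texture projection: pick the world plane the
// face is most aligned with and derive UVs from the remaining two coordinates.
// PasteTextureProjected would use the source face's plane instead of a fixed one.
#include <cmath>

struct Vec3 { double x, y, z; };
struct TexCoord { double u, v; };

TexCoord projectAxially(const Vec3& vertex, const Vec3& faceNormal,
                        double scaleU, double scaleV)
{
    const double ax = std::fabs(faceNormal.x);
    const double ay = std::fabs(faceNormal.y);
    const double az = std::fabs(faceNormal.z);

    if (az >= ax && az >= ay) return { vertex.x / scaleU, vertex.y / scaleV }; // Z (floor/ceiling)
    if (ax >= ay)             return { vertex.y / scaleU, vertex.z / scaleV }; // X plane
    return                           { vertex.x / scaleU, vertex.z / scaleV }; // Y plane
}
```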
  4. https://bugs.thedarkmod.com/view.php?id=5929
  5. And "Natural" would not do the job?
  6. Oh, I can see that now. It was removed in version 2.1.0, sometime in 2016, since "Natural" does a similar job, as does pasting projected from adjacent brushes. What's the exact use case of Cycle Cap that cannot be achieved with the other functions?
  7. Which function are you referring to? And in which version has it been removed from DarkRadiant?
  8. I didn't mean to suggest moving or migrating the dmapping code into DR (I already failed at that once), and you're correct that keeping DR up to date with any dmap changes would be most cumbersome. That's why I meant to just load a dmap.dll module and pump data into it, which would make it spit out portalling or proc information - pretty much like the game uses the Maya SDK: it just needs to load the DLL and feed the right data format into it.
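Just to illustrate the idea (purely hypothetical - none of these symbols exist in TDM or DarkRadiant), such a module boundary could have roughly this shape:

```cpp
// Hypothetical interface sketch for a loadable dmap module. These types and
// entry points are invented for illustration only; the real dmap code in TDM
// has no such API. The idea: the host loads the DLL, feeds it the map geometry
// in an agreed format, and gets proc/portal data back.
#include <cstddef>

extern "C"
{
    struct DmapInputSurface
    {
        const float*        vertices;     // xyz triples in world units
        std::size_t         vertexCount;
        const unsigned int* indices;
        std::size_t         indexCount;
        const char*         materialName;
    };

    struct DmapOutput
    {
        const char* procData;    // compiled .proc contents
        const char* portalData;  // portal / leak information
    };

    // Entry points the host would resolve via LoadLibrary/dlopen
    DmapOutput* dmap_compile(const DmapInputSurface* surfaces, std::size_t surfaceCount);
    void dmap_freeOutput(DmapOutput* output);
}
```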
  9. It's not only the version control system - the data structures and coding paradigms are also like from two different planets. I could merely use the engine code as a rough blueprint, but it was immensely helpful for me; I learned a lot about more modern OpenGL. Speaking about sharing code, what would be really nice would be a plugin containing the DMAP algorithm. But from what I remember when trying to port this over to DarkRadiant years ago, that code is also tied to the decl system, the materials and even image loading. Maybe @stgatilov, having worked on dmap recently, can share some insight on whether this piece of code would be feasible to isolate and move to a DLL with a nice interface. Having leak detection and portalisation code available to DarkRadiant would be beneficial for renderer performance too. Right now, the scene is completely unportalised and slow as heck.
  10. I managed to port some of the shadow mapping code over to DR, it can now support up to 6 shadow-casting lights. Of course, it'll never be as pretty as TDM, and nowhere near as fast, but it's a start.
  11. The GUI text placement code is definitely not perfect - I reverse-engineered it back in 2010. If you have a concrete example, I can look into it and compare it to the TDM sources, which we have available in the meantime.
  12. It's the way they are internally stored to reduce draw calls, but they are indeed similar. I implemented the IWindingRenderer first, since that was the most painful spot, and I tailored it exactly for that purpose. The CompactWindingVertexBuffer template is specialised to the needs of fixed-size windings, and the buffer is designed to support fast insertions, updates and (deferred) deletions. I guess it's not very useful for the other geometry types, but I admit that I didn't even try to merge the two use cases. I tackled one field after the other, so it's possible that the CompactWindingVertexBuffer can now be replaced by some of the pieces I implemented for the lit render mode - there is another ContinuousBuffer<> template that might be suitable for the IWindingRenderer, for example. It's quite possible that the optimisation I made for brush windings was premature and that parts of it can be handled by the less specialised structures without sacrificing much performance.

The model object is not involved in any rendering anymore; it just creates and registers the IRenderableSurface object. The SurfaceRenderer then copies the model vertices into the large GeometryStore - memory duplication again (the model node needs to keep the data around for model scaling). The size of the memory doesn't seem to be a problem, since the data is static and is not updated very often (except when scaling, but the number of vertices and indices stays the same). The thing that makes surfaces special is their orientation: they have to be rendered one after the other, separated by glMultMatrix() calls.

Speaking about writing the memory allocator: I was quite reluctant to write all that memory management code, but I saw no escape route. It must have been the billionth time this has been done on this planet. I'm definitely not claiming that I did a good job on any of those, but at least it doesn't show up in the profiler traces.
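To illustrate the orientation point, here's a rough sketch (simplified names, not the actual DR interfaces) of how oriented surfaces end up being drawn one after the other, each wrapped in its own model matrix:

```cpp
// Sketch only: oriented model surfaces cannot be merged into one big draw call,
// because each one needs its own model matrix applied via glMultMatrix first.
#include <GL/gl.h>
#include <vector>

struct OrientedSurface
{
    virtual ~OrientedSurface() = default;
    virtual const float* getSurfaceTransform() const = 0; // 4x4 matrix, column-major
    virtual void draw() const = 0;                        // issues the glDrawElements call
};

void renderOrientedSurfaces(const std::vector<const OrientedSurface*>& surfaces)
{
    for (const auto* surface : surfaces)
    {
        glPushMatrix();
        glMultMatrixf(surface->getSurfaceTransform()); // apply the entity's orientation
        surface->draw();                               // one draw call per surface
        glPopMatrix();
    }
}
```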
  13. Yes, this is interesting. It's achievable, with some cost, of course. Right now, the Shaders themselves implement the interfaces IWindingRenderer, IGeometryRenderer and ISurfaceRenderer. A different authority could implement these interfaces, but it would need to map the objects to the Shaders somehow (likely by using a few std::maps). The back end renderer would then ask that authority to deliver the objects - this way we can separate the bookkeeping from the Shader. The fullbright back end renderer needs that information when processing the sorted shader passes: currently the passes ask their owning shader to draw its surfaces, and this would have to be moved elsewhere. The lighting mode renderer uses the objects as delivered by the render entities, which doesn't involve the Shader doing the housekeeping, so that renderer is already heading more in this direction.
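A minimal sketch of that separation, with simplified stand-in types - an external registry keeping the object-to-shader mapping in std::maps instead of the Shader itself:

```cpp
// Illustrative only: an external "authority" owning the surface bookkeeping.
// In the real code this would implement ISurfaceRenderer & friends; here the
// types are simplified stand-ins.
#include <map>
#include <memory>
#include <vector>

class Shader;              // details omitted
class IRenderableSurface;  // details omitted

class SurfaceRegistry
{
    // per-shader object lists, kept outside of the Shader class itself
    std::map<const Shader*, std::vector<std::shared_ptr<IRenderableSurface>>> _surfacesByShader;

public:
    void addSurface(const Shader& shader, std::shared_ptr<IRenderableSurface> surface)
    {
        _surfacesByShader[&shader].push_back(std::move(surface));
    }

    // The back end asks the registry (not the shader) for the surfaces
    // belonging to a pass while processing the sorted shader passes.
    const std::vector<std::shared_ptr<IRenderableSurface>>& getSurfaces(const Shader& shader)
    {
        return _surfacesByShader[&shader];
    }
};
```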
  14. I suspect it's all about the draw calls. In fullbright mode DR now invokes far fewer GL calls than in lit render mode. There's not much difference when it comes to the oriented model surfaces - these are really almost the same (and use the same vertex storage too) - it's the brushes and patches. Lit mode groups by entity; to regain the advantage of submitting everything in one go, it would need to dissolve that grouping information, which would have to happen every frame. I think this is going to be too taxing. Maybe when lit mode is more optimised, we can try to merge the two modes.

You've made a correct observation about the backend data storage though: the geometry and winding renderers are (at the moment) not sharing their vertex data between the two modes, so memory is duplicated and copied around often. That's not good. The main reason for this duplication is the chronological order in which I adjusted the renderer. I chewed through this starting with fullbright mode - first brushes, then patches, then models, and finally the visual aids like lines and points. After that I moved on to do the research on lit mode, and all of that is reflected in the code. I admit that I took this approach on purpose: when starting, I didn't have a full grasp of what was going to be necessary, I had to learn along the way (and aim for not getting burnt out half-way through). Now that the full picture is available, the thing can be improved further, and the storage is probably among the first things that need to be optimised.
  15. First of all, thanks for taking the time to respond - this has been getting wordier than I anticipated. Yes, the distinction is in the shaders now. It's still possible to distinguish the two, since the VolumeTest reference provides the fill() check, so some onPreRender() methods react to this and prepare different renderables. That's still necessary at this point, since some wireframe renderables call for a different appearance. This doesn't mean it can't get any simpler, though. The objects still request a coloured line shader, like <0 0 1> for a blue one. In principle, now that the vertex colour is shipped along with the geometry data, the colour distinction in the shader itself may not even be necessary anymore - there could be a single line shader used to draw stuff in the orthoview.
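A minimal sketch of that fill() pattern, using simplified stand-in types rather than the real DR signatures:

```cpp
// Sketch only: a node reacting to VolumeTest::fill() during onPreRender() and
// preparing either its solid or its wireframe renderable. Types are simplified.
struct VolumeTest
{
    bool fill() const { return _fill; } // true for the camera, false for the orthoview
    bool _fill = true;
};

class SomeNode
{
public:
    void onPreRender(const VolumeTest& volume)
    {
        if (volume.fill())
        {
            updateSolidRenderable();     // camera preview: filled geometry
        }
        else
        {
            updateWireframeRenderable(); // orthoview: lines, e.g. a "<0 0 1>" line shader
        }
    }

private:
    void updateSolidRenderable() { /* attach triangles to the solid shader */ }
    void updateWireframeRenderable() { /* attach lines to the wireframe shader */ }
};
```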
  16. Lighting Mode Approach

All EntityNodes register themselves in the RenderSystem on scene insertion. The same goes for the lights: they register themselves as RenderableLight objects, so the OpenGLRenderSystem knows about every IRenderEntity and every RendererLight. The frontend phase uses the same onPreRender() call. All scene nodes are aware of their parent IRenderEntity (this had already been done before), which enables them to attach IRenderableObjects to their IRenderEntity. This way every entity knows about the renderable objects it is a parent of. During back end rendering, the algorithm makes use of the IRenderEntity::foreachRenderableTouchingBounds() method to select those objects that intersect a light's volume.

IGeometryStore

Every IRenderableObject has its vertex data stored in the central IGeometryStore owned by the render system. It doesn't know where exactly that is, but it receives a Slot Handle to be able to update the geometry or remove it later. The IRenderableObject::getStorageLocation() method exposes the storage handle and enables the back end renderer to access the data by object. The geometry store handles two RAM buffers that are protected by glFenceSync, in preparation for moving all that data to a GPU buffer without running the risk of altering data that is still in use (a small sketch of that double-buffering idea follows at the end of this post). The number 2 can be increased if needed (TDM is using 3 of them). Changes and updates to the geometry buffer are recorded during the front-end render pass and are propagated to the secondary buffer when switching between frames, to keep the amount of vertex data copied around reasonably low. The back end currently processes all the IRenderableObjects one by one and uses the same glDrawElementsBaseVertex call for every encountered object (so there's room for optimisation here, possibly bunching the calls together and then using glMultiDrawElementsBaseVertex).

Windings

Windings are special again, and not very optimised as of yet. BrushNodes don't do anything; it's the Shader (in its role as WindingRenderer) that groups the windings per entity and clusters them into one large IRenderableObject per entity. Such an object is likely to intersect far too many lights in the scene, so there's room for using a flexible space partitioning system here.

Geometry and Surfaces

The base implementations provide a convenient attachToEntity() method which takes care of the bureaucracy. The nodes just need to call it with the correct IRenderEntity* argument.

Backend

I tried to use the TDM renderer as a blueprint. There's a dedicated RenderSystem::renderLitScene() method which is called by the CamWnd when in lighting mode. The steps are (see render/backend/LightingModeRenderer.cpp):

- For every known light, check each known entity and intersect the objects
- Every intersecting object produces an interaction; objects are sorted by entity and material
- All collected objects are considered for the depth fill pass: only the suitable materials provide that pass
- Interaction Pass: draw the objects per light and entity, using the correct GLSL program
- Blend Pass: draw all other passes like blend stages or skyboxes

The cubemap program needed to render skyboxes has been implemented in GLSL. It doesn't handle reflective stages yet, only regular cubemaps.

Results

Everything is still pretty rough and not optimised yet, but it's working. Particle rendering, skyboxes, blend stages and regular light interactions are showing up properly, so it's at least at the same feature level as before the changes, which is what I've been aiming for in this branch.
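To make the double-buffering idea a bit more concrete, here's a minimal sketch using simplified types - this is not the actual IGeometryStore code, just an illustration of guarding buffers with glFenceSync:

```cpp
// Sketch of the double-buffer idea: write into the current buffer while a fence
// guards the one the GPU may still be reading from. A real implementation would
// only copy the modified regions instead of the whole buffer.
#include <GL/glew.h>
#include <array>
#include <vector>

class DoubleBufferedStore
{
    struct Buffer
    {
        std::vector<float> data;    // vertex data, still in RAM (a VBO later on)
        GLsync fence = nullptr;     // signalled once the GPU is done with this buffer
    };

    std::array<Buffer, 2> _buffers; // two buffers here; TDM reportedly uses three
    unsigned _current = 0;

public:
    std::vector<float>& currentData() { return _buffers[_current].data; }

    // Called when switching frames: wait until the GPU has released the buffer
    // we are about to write to, then carry this frame's changes over.
    void swapBuffers()
    {
        unsigned next = (_current + 1) % _buffers.size();
        Buffer& target = _buffers[next];

        if (target.fence != nullptr)
        {
            const GLuint64 oneSecondInNanos = 1000000000;
            glClientWaitSync(target.fence, GL_SYNC_FLUSH_COMMANDS_BIT, oneSecondInNanos);
            glDeleteSync(target.fence);
            target.fence = nullptr;
        }

        target.data = _buffers[_current].data; // propagate pending modifications
        _current = next;
    }

    // Called after the draw calls reading from the current buffer were issued
    void insertFence()
    {
        _buffers[_current].fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    }
};
```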
  17. Fullbright Approach

The front-end render pass is reduced to a single call: onPreRender(const VolumeTest&). When invoked, each node has the chance to check for any updates that have happened since the last frame, like material changes, changed texture coordinates or new target lines. Nodes are no longer submitting renderables to the collector. Instead, they grab a reference to the Shader from the RenderSystem (like before) and attach their geometry to it. The geometry stays attached to the shader until it is updated or removed by the node during a future onPreRender() call, or until the node is removed from the scene.

Shaders provide a specialised API for the most common use cases: an API for brush windings (IWindingRenderer), an API for general purpose geometry like path boxes, target lines, vertices and quads (IGeometryRenderer), and an API for triangulated, oriented surfaces, i.e. models (ISurfaceRenderer). The nodes don't know how the shader is dealing with their data, but they receive a numeric Slot Handle that allows them to update or remove their geometry later. These IWhateverRenderer implementations are designed to internally combine as many objects as possible.

There is no distinction between orthoview rendering and camera rendering anymore (renderWireframe and renderSolid are gone). It's all about the shaders: they know whether they are suitable for rendering in one of these view types, or both. The Shader implementations provide a drawSurfaces() method that is invoked by a shader pass during the back end rendering phase. This sets up the glEnableClientState() calls and submits the data through glDrawElements.

Windings

To achieve fewer draw calls, all windings of a given size (more than 90% of the faces have 4 vertices) are packed together into a single CompactWindingVertexBuffer that stores all windings of that material in one large, indexed vertex array. Winding removal and re-addition is fast: the buffer keeps track of empty slots and is able to re-fill them quickly with a new winding of the same size. Index generation uses a templated WindingIndexer class that creates indices for GL_LINES, GL_POLYGON and GL_TRIANGLES. It is up to the Shader to decide which indexing method is used: orthoview shaders use GL_LINES, while the camera preview uses GL_TRIANGLES (a small sketch of that index generation follows at the end of this post). Every winding is specified in world coordinates.

Geometry

This is the API used by patches, entity boxes, light volumes, vertices, etc. Objects can choose the GeometryType they are rendering: Lines, Points, Triangles or Quads. The Shader internally sorts the objects into separate buffers for each primitive type, to submit a single draw call for all the objects sharing the same type. All Geometry uses world coordinates.

Surfaces

This API is similar to the Geometry API, but here no data is actually submitted to the shader. Instead, IRenderableSurface objects are attached to the shader; they provide a getSurfaceTransform() method that is used to set up the model matrix before submitting the draw calls. Surface vertices are specified in local coordinates.

Highlighting

The shader API provides an entry point to render a single object when it is selected. This is going to be much slower than the usual draw calls, but the assumption is that only a small portion of all map objects is selected at the same time.

Vertex Storage

While the data is now stored in the shader, it's still in main RAM. No VBOs are used yet; that would be a logical next optimisation step.

Results

With the above changes, the number of draw calls in a fairly sized map went from 80k down to a few hundred. While the first attempts at combining the brushes doubled the frame rate of my benchmark map (using the same position and view angles, drawing it 100 times), this later went down to a 30% speed improvement after migrating the model surfaces. It turns out that rendering the models using display lists is really fast, but it violated the principle of moving the calls to the backend. It has to be taken into account that after the changes, the vertex data is still stored in main memory, not in VBOs.
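A small sketch of the index generation idea for GL_LINES and GL_TRIANGLES (illustrative only, not the actual WindingIndexer template):

```cpp
// Sketch: generate indices for a convex winding of vertexCount vertices that
// starts at baseIndex inside the shared vertex buffer.
#include <cstdint>
#include <vector>

// Orthoview: one line segment per edge, closing the loop
std::vector<std::uint32_t> indexAsLines(std::uint32_t baseIndex, std::uint32_t vertexCount)
{
    std::vector<std::uint32_t> indices;
    for (std::uint32_t i = 0; i < vertexCount; ++i)
    {
        indices.push_back(baseIndex + i);
        indices.push_back(baseIndex + (i + 1) % vertexCount);
    }
    return indices;
}

// Camera preview: fan the convex winding into vertexCount - 2 triangles
std::vector<std::uint32_t> indexAsTriangles(std::uint32_t baseIndex, std::uint32_t vertexCount)
{
    std::vector<std::uint32_t> indices;
    for (std::uint32_t i = 1; i + 1 < vertexCount; ++i)
    {
        indices.push_back(baseIndex);
        indices.push_back(baseIndex + i);
        indices.push_back(baseIndex + i + 1);
    }
    return indices;
}
```

For the typical 4-vertex winding this yields 8 line indices (four edges) and 6 triangle indices (two triangles).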
  18. I'm opening this topic to summarise the technical changes that have been made to DR's renderer and to get some feedback from my fellow coders. I'd love to get a peer review on the code changes, but going through that by looking at a pull request of that renderer branch would be a terrible experience, I assume, so instead I'd like to give an overview of what is done differently now.

General things to know about DR's renderer

DarkRadiant needs to support three different render views or modes: orthographic view, editor preview (fullbright) and lighting preview. Each of them has very different needs, but the lit preview is the most complex one, since it ideally should resemble what the TDM engine is producing. Apart from the obvious things like brush faces and model geometry, the renderer needs to support drawing editor-specific things like path connection lines, light volumes, manipulators (like the rotation widget) or patch vertices. Nodes can be selected, which makes them appear highlighted: they display a red overlay and a white outline in the camera preview, whereas the orthoview outlines selected items using a thicker red dashed line.

DarkRadiant cannot specialise its renderer on displaying triangles only. Path lines, for instance, use GL_LINE_STRIPs; single brush faces (windings) use GL_POLYGON for their outline (triangulating brush faces in the ortho view or the camera (when selected) introduces a lot of visual noise - we just want the outline); patches want their control mesh rendered using GL_QUADS. Model surfaces (like .ASE and .LWO models), on the other hand, use GL_TRIANGLES all the way.

Almost every object in DarkRadiant is mutable and can change its appearance as authors manipulate the scene. CPU-intensive optimisations like generating visportal areas are not a likely option for DR, since the scene can fundamentally change between operations.

The Renderer before the changes

DR's rendering used to work like this: all the visible scene nodes (brushes, patches, entities, models, etc.) were collected. They were visited and asked to forward any Renderable object they'd like to display to a provided RenderableCollector. The collector class (as part of the frontend render pass) sorted these renderables into their shaders (materials). So at the end of the front end pass, every shader held a list of objects it needed to display. The back end renderer sorted all the material stages by priority and asked each of them to render the objects that had been collected, by calling their OpenGLRenderable::render() method. After all objects had rendered their stuff, the shader objects were emptied for the next frame. Culling of invisible objects happened by sorting objects into an Octree (which is a good choice for ortho view culling); some culling was done in the render methods themselves (both frontend and backend calls).

The problems at hand

Doing the same work over and over again: it's rare that all the objects in the scene change at once. Usually prefabs are moved around, faces are textured, brushes are clipped. When flying through a map using the camera view, or when shifting the ortho view around, the scene objects are unchanged for quite a number of frames.

Separation of concerns: every renderable object in the scene has been implementing its own render() method that invoked the corresponding OpenGL calls. There was legacy-style glBegin/glEnd rendering (used for path nodes), glDrawElements, glCallList, including state changes like enabling arrays, setting up blend modes or colours. These are render calls that should rather be performed by the back end renderer, and should not be the responsibility of, let's say, a BrushNode.

Draw Calls: since every object has been submitting its own geometry, there has been no way to group the calls. A moderately sized map features more than 50k brush faces, and about half as many patch surfaces. Rendering the whole map can easily add up to about 100k draw calls, with each draw call submitting 4 vertices (using GL_POLYGON).

Inconsistent Vertex Data: since each object was doing the rendering on its own, it has been free to choose what format to save its data in. Some stored just the vertex' 3D coordinates, some added colour information, some were using full-featured vertices including normals and tangents.

State Changes: since every object was handled individually, the OpenGL state could change back and forth in between a few brush windings. The entity can influence the shader passes by altering e.g. the texture matrix, so each renderable of the same material triggered a re-evaluation of the material stage, leading to a massive amount of OpenGL state changes. Then again, a lot of brushes and patches are worldspawn, which never does anything like this, but optimisation was not possible since the backend knew nothing about that.

Lighting mode rendering: lighting mode had a hard time figuring out which object was actually hit by a single light entity. Also, the object-to-entity relationship was tough to handle for the back end. Seeing how idTech4 or the TDM engine handles things, DR has been doing it in reverse. Lighting mode rendering has been part of the "solid render" mode, which caused quite a few if/else branches in the back end render methods. Lighting mode and fullbright mode are fundamentally different, yet they were using the same frontend and backend methods.

The Goals

OpenGL calls moved to the backend: no (frontend) scene object should be bothered with how the object is going to be rendered. Everything in terms of OpenGL is handled by the back end.

Reduced amount of draw calls: so many objects are using the same render setup - they're using the same material, are children of the same parent entity, and are even in almost the same 3D location. Windings need to be grouped and submitted in a single draw call wherever possible. The same goes for other geometry.

Vertex data stored in a central memory chunk: provide an infrastructure to store all the objects in a single chunk of memory. This will enable us to transition to storing all the render data in one or two large VBOs. (A rough sketch of what such a store could look like is at the end of this post.)

Support object changes: if everything is to be stored in a continuous memory block, how do we go about changing, adding and removing vertex data? Changing geometry (and also material changes, like when texturing brushes) is a common use case and it must happen fast.

Support oriented model surfaces: many map objects are influenced by their parent node's orientation, like a torch model surface that is rotated by the "rotation" spawnarg of its parent entity. A map can feature a lot of instances of the same model, and the renderer needs to support that use case. Brush windings and patches, on the other hand, are never oriented; they always use world coordinates.

Unified vertex data format: everything that is submitted as renderable geometry to the back end must define its vertex data in the same format. The natural choice would be the ArbitraryMeshVertex type that has been around for a while.

All in all, get closer to what the TDM engine is doing: by doing all of the above, we put ourselves in the position to port more engine render features over to DR, maybe even add a shadow implementation at some point.
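To make the slot-handle goal a bit more concrete, here's a minimal sketch of such a central store; the vertex struct is a simplified stand-in for ArbitraryMeshVertex and the class itself is illustrative, not the actual implementation:

```cpp
// Sketch only: nodes allocate a slot in one big vertex array, get a handle back
// and can update or remove their geometry later without knowing where it lives.
#include <algorithm>
#include <cstddef>
#include <map>
#include <vector>

struct MeshVertex // simplified stand-in for the unified vertex format
{
    float position[3];
    float normal[3];
    float texcoord[2];
    float colour[4];
};

class CentralGeometryStore
{
public:
    using SlotHandle = std::size_t;

    SlotHandle allocate(const std::vector<MeshVertex>& vertices)
    {
        SlotHandle handle = _nextHandle++;
        _slots[handle] = { _storage.size(), vertices.size() };
        _storage.insert(_storage.end(), vertices.begin(), vertices.end());
        return handle;
    }

    // Same-sized updates only; a real store also needs to grow/shrink slots
    // and defragment the buffer over time.
    void update(SlotHandle handle, const std::vector<MeshVertex>& vertices)
    {
        const Slot& slot = _slots.at(handle);
        std::copy(vertices.begin(), vertices.begin() + slot.count,
                  _storage.begin() + slot.offset);
    }

    void remove(SlotHandle handle) { _slots.erase(handle); } // space reclaimed later

private:
    struct Slot { std::size_t offset; std::size_t count; };

    std::vector<MeshVertex> _storage;   // the single large chunk (later one big VBO)
    std::map<SlotHandle, Slot> _slots;
    SlotHandle _nextHandle = 0;
};
```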
  19. That's a nice comparison in that talk, and some of the points made are very interesting. Some of them are perfectly applicable to TDM editing, while others are not (like the keyboard shortcuts example - I guess most editors provide customisable hotkeys). The func_instance is a nice concept that might be applicable to prefabs. If it weren't for the map file structure getting in the way, this might have been worthwhile investigating, but I assume it's not easy to accomplish while maintaining compatibility with the .map format.
  20. That case is handled: it doesn't create an encompassing group if the outer group already exists.
  21. That checkbox is unrelated to the format the prefabs are stored in. It just groups the imported piece together as a whole, whether there are sub-groups or not. No support in TDM, at least not yet. The portable format has been introduced for two reasons: first, to make it easier for other technologies to read the map format, since XML parsers are widely available; second, to fix the problem of clipboard data losing the group info when copying and pasting stuff within DarkRadiant and between DarkRadiant instances - when copying data to the clipboard, the mapx format is used (you can easily see that by copying map parts and pasting them into a text editor like Notepad).
  22. As far as the file syntax is concerned, there's no difference between a Doom 3 .map and a Doom 3 .pfb. When DR saves a .pfb file, no corresponding .darkradiant file is created, i.e. all map metadata like layer and group info is lost. There's no difference in the XML structure of the mapx and pfbx file contents either; no .darkradiant file is created for either of them, since it's not necessary - layer and group information is saved in the mapx/pfbx file itself. A .pfb file uses the decl-style syntax of Doom 3 maps, while .pfbx is an XML file. The XML file takes up more space on disk than the decl-based one, if anybody cares about that.

Yes, these two commands are different, as expected: Export selected as Map will create a regular map file without adding any additional stuff like those sealing brushes or moving the player start around - it's just for saving the selected part of the map, really just what the name says. The region feature has the goal of making a part of the map (the regioned one) get past the dmap flood fill phase, hence the wall brushes.
  23. Yep, when building from source you're linking the DR binaries against specific libraries that were present on your system at that time. They become dependencies - removing them will cause the DR binaries to no longer work. Just re-build and you should be fine.
  24. Yes, I'm pretty confident that recompiling DR will fix this. Did you compile from source before or did you install it from a package?
  25. Yes, that's one way. But you don't even need to overwrite the current installation, you can also run the new version side by side with the old one, using a separate folder for 2.14. That way you can switch back to the old version if you don't like it. When switching back and forth, depending on how old your current version is, you might lose a keyboard shortcut or two. (If the old version is 2.12 or 2.13, I think no shortcuts will be lost.)