Fixing the TDM build system


33 replies to this topic

#26 stgatilov

    Advanced Member

  • Development Role
  • 586 posts

Posted 31 December 2017 - 07:34 AM

I was the last one to mess with Linux build.

I'll update build instructions for Linux after 2.06 release.

 

As of 2.06 and SVN, you can build the Linux version from the source code root directory by running:

scons BUILD="release" TARGET_ARCH="x64" -j6 ..

If you omit the BUILD setting, it'll simply build some default configuration (debug, I believe).

If you omit TARGET_ARCH, then it'll build 32-bit.

If you omit ".." at the end, then you will only get deploy-style binaries in the current directory (without debugging info). If you add "..", then you will also get development-style binaries in "../darkmod/".

The ".." argument is the only non-intuitive thing now: scons needs it to be able to write outside of the current directory.
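For quick reference, the variants described above can be summarized like this (a sketch that only echoes the command lines, so they can be compared without a scons toolchain present):

```shell
# The scons invocations described in this post, side by side.
# Echoed rather than executed so no build toolchain is needed here.
FULL='scons BUILD="release" TARGET_ARCH="x64" -j6 ..'   # release, 64-bit, dev binaries in ../darkmod/ too
DEPLOY='scons BUILD="release" TARGET_ARCH="x64" -j6'    # same, but deploy-style binaries only
DEFAULTS='scons -j6 ..'                                 # omitted settings: debug build, 32-bit
echo "$FULL"
echo "$DEPLOY"
echo "$DEFAULTS"
```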

 

 

I usually use CMake at my work for building C++ projects on Windows.

However, adding it to TDM would make the build more complicated for Windows coders. And tweaking MSVC-specific parameters would be harder (yes, there are such tweaks).

Also, GCC and MSVC builds differ in too many ways; it would be very hard to write a single CMake build process for both platforms.

MSVC is not too bad after all, if you apply most of the changes to props files instead of vcxproj files, and if you have a habit of keeping the vcxproj file clean.

 

The directories linux/win32/win64 are somewhat messy now. They could benefit from some cleaning.

However, the very idea of keeping all the dependencies in SVN is quite good for the Windows platform.

 

The dream of moving from SVN to Git is surely caused by The Grand GitHub Hype  :D   But I think it will never happen to TDM.

Assets are simply too large, so they will stay in SVN forever. Having two different version control systems for a single project is a stupid thing to do.

Plus: Git is an order of magnitude more complicated and less user-friendly than SVN.

There are a lot of people here who are not programmers at all, and they use TortoiseSVN without many complaints. If you ask them to use Git, you will get into serious trouble  :laugh:

 

The boost thing is no longer needed to build TDM.

I hope some day tdm_update will be refactored to not depend on boost, then we can finally remove boost from SVN and be happy.

 

 

As for the docker thing.

I don't understand yet what is the benefit of it.

Here is what I see in your docker file:

RUN apt update \
  && apt upgrade -y \
  && dpkg --add-architecture i386 \
  && apt update \
  && apt install -y build-essential \
                    m4 \
                    mesa-common-dev \
                    python-dev \
                    libbz2-dev \
                    scons \
                    subversion \
                    libc6-dev-i386 \
                    g++-multilib \
                    libx11-dev:i386 \
                    libxxf86vm-dev:i386 \
                    libopenal-dev:i386 \
                    libasound2-dev:i386 \
                    libxext-dev:i386

The problem of building and installing 32-bit stuff on 64-bit Linux will be gone after TDM 2.06, because we have a 64-bit version now.

The other commands in the first four lines are just basic things that everyone should know if they install anything on Linux.

The last command says to install a set of packages.

I agree that we should maintain the minimal set of required packages somewhere. But Wiki or readme.txt should be enough for it.

 

Is it right that we can create a Docker VM with a properly configured Linux to build and run TDM, then deploy this VM to both Linux players and Linux developers, and they can simply run everything from the Docker container on their own Linux, even if it is a different version/distribution/architecture/whatever?

What about native libraries then? For example, will everyone use the same OpenGL or OpenAL library, regardless of the hardware they have? 

 

 

One last note about automatic builds.

I really like the idea of doing automatic builds of TDM on all platforms. Also, it would be great to automatically run TDM, start some default mission and check that TDM does not crash =)

It would solve some problems we have now, like the Linux build being broken for months without anyone caring  :o

However, it needs some serious administration efforts.



#27 demilich666

    Member

  • 16 posts

Posted 31 December 2017 - 12:59 PM

You're right that the Dockerfile itself doesn't do anything particularly clever, but its output is a Docker 'image'.  This image can then be run as a container (the 'VM'), and it contains everything you need to build the project.  What this means is that nobody has to worry about setting up their PC with all the required dependencies.  They don't even have to think about it.

I included the Dockerfile/docker image build to illustrate the entire process.  Normally a 'user' (whether a real person or an automated CI process) doesn't have to worry about that part.  Usually someone creates the Dockerfile, builds the image and publishes it to Dockerhub.  Then you, as a user, pull (download) the image to use.  I haven't published an image, but if I had, it would look like so:

docker pull darkmod/darkmod-build

Then you build the project using that image using 'docker run':

docker run <a bunch of arguments> darkmod/darkmod-build <some build command>

The 'advantage' is that the build environment is isolated from anyone's PC and is immutable.  It will never change, ever, and will work every single time. For everybody.  No matter what Linux distro they have running, or what state their OS is in.

 

Also - you don't even have to do the 'docker pull' - if the image in the 'docker run' command doesn't exist on your local filesystem, Docker will pull it automatically.
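To make the shape of such a command concrete, here is one hypothetical invocation - the mount paths, working directory and image name are my assumptions, not documented usage:

```shell
# Hypothetical 'docker run' for a TDM build: bind-mount the host source
# checkout into the container (-v), set the working directory (-w), run
# the usual scons command inside, and remove the container afterwards
# (--rm). All paths and the image name are assumptions.
SRC_DIR="$HOME/darkmod_src"
IMAGE="darkmod/darkmod-build"
RUN_CMD="docker run --rm -v $SRC_DIR:/darkmod -w /darkmod $IMAGE scons BUILD=release TARGET_ARCH=x64 -j6"
echo "$RUN_CMD"   # printed for inspection; execute on a machine with Docker installed
```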

 

Github:

 

I'm sure you've all discussed the pros and cons of switching, and I don't mean to rehash it all here - just that if repo size was a blocker, I would be happy to help look into possible solutions, is all... :-)

 

I've used SVN longer than I've used Git, and I've been involved in a few real-world projects that were migrated - it was never a problem.  Yes, it can be complicated, but it doesn't have to be.  I am no expert myself - I basically stick to push and pull, and you can merge via the GitHub UI, etc.

 

Docker:

 

So primarily I was suggesting using Docker as a build tool - not for running the game.  Containers are primarily used for server applications or command-line tools.  Having said that, it is possible to run UI apps in containers (I've run browsers in containers before - e.g. Selenium).  I think you could use the X Window System in the container, or use something like a VNC server.  I would have to look into this, but would be happy to do it if you think it would be useful - and by the sounds of it, it would be.

 

The scenario you mention (running the container on any distro) is indeed what you can do with Docker - and why it's so useful.  You can even do this with Windows or Mac (although only running Linux containers).  Windows containers exist now, but I have no experience at all with them, so I can't really say what's possible there.

 

Automated builds:

 

If we can use a free CI system then the 'administration efforts' would be minimal - especially if we use Docker.  That's one of the reasons I'm suggesting it - to pave the way for stuff like this.

 

One final note in case it's not obvious - using Docker for builds does NOT replace the current build system (scons or whatever).  That all stays put.  It just runs the current build command in the container.  It's the environment that changes, not the process.


Edited by demilich666, 31 December 2017 - 01:22 PM.


#28 stgatilov

    Advanced Member

  • Development Role
  • 586 posts

Posted 01 January 2018 - 02:03 AM

So Docker is just a virtual machine (you may call it lightweight, but I think in the general case it can get as heavy as a full VM).

Anyone can create a Docker image and put it somewhere; it can exist as an unofficial thing (like the unofficial installer) or be added to the assets SVN --- it does not matter.

I think it should not be made official, because I'm afraid no one would maintain it, and it would soon become yet another "obsolete way to build TDM on Linux".

One of the reasons why the Linux build instructions are so bad is that currently every active developer uses Windows as their main OS  :mellow:

 

We usually use a VirtualBox VM on Windows to build and run the Linux version of TDM.

This approach works perfectly for building TDM, and barely works for running it.
VirtualBox uses a discontinued project for OpenGL support in the Linux guest. It is enough to at least run the game. But I'm afraid it will die at some point due to graphics modernization.
I'm pretty sure that Docker won't solve the OpenGL problem any better than VirtualBox does.
 
Now the question is: how is Docker better than VirtualBox?
Running Linux containers on Windows is possible, but it seems to work via virtualization too (some Windows versions of Docker even use VirtualBox, I believe).
Why not simply create a VirtualBox Linux VM and distribute its VHD to everyone?
This VHD could be used to build TDM on Linux, either manually or automatically. It cannot be used to play the game, but Docker won't help with that either.
 
So I'm trying to understand who would benefit from having a Docker image.
The players surely won't: they download binaries and run them; they don't need to build TDM, and even if they do build it, playing the game is much more important to them.
The developers need to build and also run the game (even if it is hardly playable). I'm afraid running TDM in Docker will stop at an "OpenGL not initialized" error. Is it possible to build a binary inside Docker and then run it outside Docker in an arbitrary Linux environment?
Also, developers need to debug the game. I have used the crazy ddd tool twice to fix crashes. Someone else will use another tool. Will a developer be able to add such a tool to a prebuilt Docker image? I believe that immutability is not a benefit but a burden for an ordinary developer.
So perhaps administrators can benefit from Docker. We can run autobuilds in the Docker environment. But they can be run on native Linux as well. Yes, Docker can make it easier and more reproducible, I suppose.
 
The build environments are different, and TDM can build on one machine and fail to build on another. In such a case it is better to fix compatibility, so that it builds on both machines. Sticking to Docker means sticking to a single Linux environment and no longer caring about the other cases. This is OK for the Windows world (e.g. we support only MSVC 2013), but I believe this is not the default way of doing things in the Linux world.
 
 
Please keep in mind that 1) I am quite a pessimistic person and 2) I don't know Docker  :D
Maybe instead of discussing Docker, you can just create an image and share it with other Linux (and non-Linux) guys. If it gains some popularity, it would be much easier to see whether it is a welcome addition or an unnecessary thing.
 
P.S. Added issue for build instructions update.


#29 demilich666

    Member

  • 16 posts

Posted 01 January 2018 - 10:24 AM

OK - I've pushed my build image to dockerhub: https://hub.docker.c.../darkmod-build/

 

There is a short blurb there on how to run it.  All you need to do is install Docker and run that command.  It only covers the simple case of a release build - no sense trying to cover every single use case at the moment.

 

Don't worry if you're pessimistic - I don't blame you.  If it makes any difference, I do this sort of thing for a living, and I wouldn't dream of doing any sort of Linux build any other way than this.

 

 

 

 Is it possible to build a binary inside Docker and then run it outside Docker in an arbitrary Linux environment?

 

Yes, absolutely - this is in fact what I'm proposing.  If you look at that command to run the darkmod build, it 'bind-mounts' (the -v argument) the source code directory from the host inside the container and then runs the build.  When the build stops, the container exits, and the newly built binaries will be there on your host filesystem, with no trace left of the container.

 

 

 

The build environments are different, and TDM can build on one machine and fail to build on another one

 

But why does this happen in the first place?  It's because of inconsistent environments - the very thing that Docker fixes.  Correct me if I'm wrong, but you can build for x86-64 on Ubuntu and the resulting binaries will run on Fedora, right?  If so, it doesn't matter what OS the build image uses.  Now if the game runs on one machine and not another, that is a different story (the runtime environment).  Even if you did want different Linux build environments, all you would need to do is create a Docker image for each one.  Again - this is where an automated CI system would help...

 

Also, to be clear, containers are NOT VMs.  I only used that term because it's the best way to describe them to someone who is not familiar with them.  They run a single process only (not an entire OS), and their resource usage is limited to whatever the process running inside the container is using.  There is lots of stuff on Google describing the differences.  Basically, a running container will just use whatever resources on the host the process needs.  Yes, there is overhead, but it's almost negligible.  A key thing to understand about how they can do all this without being a true VM is that they share the host operating system's kernel.  Another way to think of it: a running container uses only the binaries from the image, but the resources and kernel of the host.  A traditional VM allocates and reserves fixed CPU, memory and disk resources, as well as having a much higher overhead for the hypervisor.
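The "no fixed allocation" point is visible in Docker's own flags: resource caps are opt-in, unlike a VM's up-front reservation. A small sketch (--cpus and --memory are real Docker options; the image name is an assumption):

```shell
# By default a container may use whatever host resources its process
# needs; --cpus and --memory impose optional caps. The image name is
# hypothetical; commands are echoed for comparison rather than executed.
CAPPED="docker run --rm --cpus=2 --memory=2g darkmod/darkmod-build scons -j2"
UNCAPPED="docker run --rm darkmod/darkmod-build scons -j6"
echo "$CAPPED"
echo "$UNCAPPED"
```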

 

Regarding running Dark Mod in Docker:

 

Let's keep this to just builds for now - I don't want to promise something I can't do.  I haven't looked into doing this and I don't know if it will work.  Using a VM such as VirtualBox isn't a bad idea at all - and if it works, then great.  However, I don't have any visibility of what you guys have been doing, so I'm not aware of any problems you might have.  If you need a way to share a VM image, then I would use something like Vagrant.  Again, if Docker is an option here - it uses fewer resources and startup time is way faster than with a traditional VM.

 

Having said all that, some preliminary searching looks sort of promising.  One thing you can do with Docker is share hardware resources with the host - for example graphics cards, sound cards, the X11 display, and I/O devices.  This is all done via bind-mounts of the /dev/xxx devices or UNIX sockets.  Nvidia has got into the game for GPU processing tasks in datacenters: http://www.nvidia.co...-container.html (think cryptocurrency or AI computing).
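For the record, sharing the display and sound device might look something like the following - entirely untested for TDM, and every path, variable and image name here is an assumption:

```shell
# Hypothetical sketch: expose the host X11 socket and sound device so a
# GUI app inside the container could render and play audio. Untested
# for TDM; device paths and the image name are assumptions. The command
# is built as a string and echoed so it can be inspected safely.
GUI_CMD="docker run --rm \
  -e DISPLAY=\$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  --device /dev/snd \
  darkmod/darkmod-build ./thedarkmod.x64"
echo "$GUI_CMD"
```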

 

However, once we start sharing host resources, we lose some of the benefits of using a container - that is, we start introducing host dependencies.  But it could end up being an easy way to share a current build for testing purposes - it would save having to send a ZIP file around.  The current build would live on Dockerhub, which is a central location, and all one would have to do is pull and run the latest image.


Edited by demilich666, 01 January 2018 - 10:51 AM.


#30 demilich666

    Member

  • 16 posts

Posted 01 January 2018 - 12:06 PM

Just had a quick look at some current hosted CI services.  There is a lot of good stuff out there that is free for open-source projects.  Most of them are Linux/Docker/GitHub-centric, which would be fine if not for the GitHub thing, but I found one that seems to tick all the boxes for Windows:

 

https://www.appveyor.com

 

  • free for open source
  • Windows / Visual studio build environments
  • supports public Subversion repositories

Some more about their build environments: https://www.appveyor...ld-environment/

 

The only thing I'm not sure about is whether the environment would have everything required for building TDM.  They have the DirectX SDK and Boost - is there anything else it needs?

 

Another option for Windows might be Team Services, but it's hard to discern what the limitations are - it looks like it's free for up to 5 people, and you might have to use Git (or TFS) as well: https://www.visualst.../team-services/


Edited by demilich666, 01 January 2018 - 12:08 PM.


#31 stgatilov

    Advanced Member

  • Development Role
  • 586 posts

Posted 02 January 2018 - 09:19 AM

I have tried to run your docker image on a fresh Ubuntu 16.04 inside VirtualBox.

 

Installing Docker is relatively easy, although not as easy as 'sudo apt-get install docker'.

The Docker image for the Darkmod build was pulled from the hub; it has a compressed size of about 200 MB.

It has successfully built 64-bit TDM in single-threaded mode.

Running with "-j6" failed (something like a cc1plus crash); maybe it is caused by VirtualBox.

 

I tried to run the resulting TDM executable outside of Docker, and it failed with the following error:

./thedarkmod.x64: error while loading shared libraries: libopenal.so.1: cannot open shared object file: No such file or directory

I had to install libopenal1 to fix it. Then it runs successfully.

I think this library is needed by TDM itself: if you download TDM via tdm_update, it will most likely print the same error on a fresh Ubuntu.
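A quick way to spot such missing runtime libraries before launching the binary is ldd, which lists every shared library an executable needs and whether the loader can find it (demonstrated on /bin/ls below so it runs anywhere; for TDM you would point it at ./thedarkmod.x64 - any "not found" line names a package that still has to be installed, like libopenal1 here):

```shell
# ldd resolves a binary's shared-library dependencies; a "not found"
# entry means the matching package (e.g. libopenal1) is missing.
# Shown on /bin/ls for portability; substitute ./thedarkmod.x64.
LIBS=$(ldd /bin/ls)
echo "$LIBS" | grep "not found" || echo "all libraries resolved"
```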

 

I have updated the compilation instructions for 2.06.

You can see the expected contents of COMPILING.txt in 2.06 here.

As you can see, for the Ubuntu distribution you only have to run:

sudo apt-get install scons m4 subversion mesa-common-dev libxxf86vm-dev libopenal-dev libxext-dev

After that you can build TDM directly using scons command line.

 

So Docker won't simplify life for contributors with Ubuntu 16.04.
On the other hand, a different version of Ubuntu may need additional steps.
A different distribution may require some nontrivial changes, as seen in the current instructions for Debian on the wiki.
It seems that having one Ubuntu-based Docker image would help users of Debian, CentOS and Fedora to build TDM.
Also, having a single Docker image should greatly reduce the problem where the Linux build is deeply broken and you cannot even tell which errors are due to a wrongly configured environment and which happen because of code changes.
 
One thing I am interested in is how easy it would be to run this Linux container in Docker for Windows.
If it is significantly easier than installing Ubuntu in VirtualBox, then the Docker image could be helpful for Windows-only developers.
It could be used to check for build errors, although it won't allow running the resulting executable.


#32 demilich666

    Member

  • 16 posts

Posted 02 January 2018 - 09:34 AM

OK - I managed with -j12 on my Ubuntu desktop, so the crash might have something to do with VirtualBox.

 

Here are the 'proper' installation docs: https://docs.docker....cker-ce/ubuntu/

 

 

Docker for Windows:

 

https://docs.docker....indows/install/

 

You need to be running either Windows 10 Professional or Windows Server 2016, I think (otherwise you have to use the Docker Toolbox version, in which case you might as well just use VirtualBox).  I don't know how many people are running either of those.  I have a dual-boot system with Ubuntu GNOME / Windows 10 Pro, but I don't know what most of you guys are using.

 

So for Docker for Windows, the default setting is to use Linux containers, but you can 'switch' it to use Windows containers.  So, to answer your question, it should work in the same way as running it on Linux.  I can try it out later and see.


Edited by demilich666, 02 January 2018 - 09:34 AM.


#33 demilich666

    Member

  • 16 posts

Posted 02 January 2018 - 05:25 PM

So I just tried it on Windows and... it worked :D

 

The only things to be aware of are a couple of settings you need to tweak (both in the Docker settings UI):

 

  • You need to 'share' a drive with Docker to allow the bind-mounting of directories - so just share the drive (or directory) containing your Dark Mod source
  • Bump up the resources dedicated to Docker.  The default for me was 2 GB RAM and 2 CPU cores; increasing this allowed me to run a multi-threaded build.  This is necessary on Windows because it's using Hyper-V.

 

Without the 2nd tweak, it failed for me in a similar manner to how it did for you (i.e. a cc1plus crash), so I suspect that was because you didn't dedicate enough cores to your VM.


Edited by demilich666, 02 January 2018 - 05:33 PM.


#34 stgatilov

    Advanced Member

  • Development Role
  • 586 posts

Posted 02 January 2018 - 11:05 PM

I have Windows 10 Home. And a lot of people here use Windows 7.

So I guess Docker for Windows is mostly unusable now (and will continue to be).

And Docker Toolbox is simply VirtualBox + other stuff.

I hope it would at least run the image without any issues.





