So about a year ago, I decided to try “something new” (at least for me) to compile Redtamarin on different platforms: Windows, macOS and Linux.
At the time, I documented the details.
The basic idea was to replace virtual machines with hardware machines.
So here is the review: did it work? Is it practical? etc.
Even if I did not update Redtamarin as much as I wanted, the setup works pretty nicely.
At the time, with a Mac mini and 16GB of RAM, adding more hardware for the other platforms was the right move. Here are the main advantages I can see:
- the isolation of hardware increased my global RAM/CPU budget
simply put: when something compiles on the Windows machine
it does not borrow RAM/CPU from my main machine
- this is great for compiling in parallel, as the machines
do not share hardware resources with one another
- remote control works as expected
from the Mac I use Microsoft Remote Desktop if I need to see the desktop and do visual stuff
and because I set up Cygwin I can also SSH into the Windows machine
setting up RDP for Linux was totally worth it, as it is much faster than VNC
- it’s very liberating to be able to work under Windows or Linux
and leave it “as is”, not think about it for a few days, and come back to it
without needing to reboot/restart a VM
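As a concrete illustration of that remote workflow, here is a minimal sketch of how a build could be kicked off on a satellite box over SSH. The host names, user name and paths (`brix-win`, `brix-linux`, `build`, `src/redtamarin`, `build.sh`) are hypothetical placeholders, not my actual setup.

```shell
#!/bin/sh
# Sketch: build the SSH invocation that starts a Redtamarin build
# on a given satellite machine (Cygwin sshd on the Windows BRIX,
# OpenSSH on the Linux one).

build_cmd() {
  # $1 = host name; prints the command instead of running it,
  # so you can preview what would be executed
  printf 'ssh build@%s "cd src/redtamarin && ./build.sh"\n' "$1"
}

# Preview the commands for both BRIX machines; pipe a line to 'sh'
# (or call ssh directly) to actually run it, and append '&' plus a
# final 'wait' to let both machines compile in parallel.
build_cmd brix-win
build_cmd brix-linux
```

Because each build runs on its own box, nothing here touches the main machine’s RAM/CPU.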
The bad parts:
- the little Gigabyte BRIX have an Atom CPU
which is not the fastest CPU for C/C++ compilation
- under Linux with GCC it’s not a big deal
- but under Windows, yes, it is slower
compared to an i5 or, better, an i7 CPU
- the BRIX sometimes have problems discovering the Wi-Fi
so I simply use Ethernet to connect them to the network
The good surprises:
- when the time came to use code signing certificates
to sign software under Windows, it happens that you now need
a hardware USB card reader, so having real hardware for Windows helped
- in another case, I was asked to connect remotely to a secure site
using tools like Cisco VPN etc. that are mainly made for Windows,
and here too having real Windows hardware helped
Meanwhile, even if the Mac mini was a good machine, and even if all this little hardware setup allowed me to extend its life, I still had to upgrade my main machine to something bigger and faster.
I now use a hackintosh as my main machine, and the Mac mini is now a “satellite” like the BRIX machines: I remote control it to compile and do other stuff.
The setup looks like this:
hackintosh (macOS 10.12 Sierra) - 64GB RAM
|_ mac mini (OS X 10.10 Yosemite) - 16GB RAM
|_ BRIX (Windows 8.1) - 8GB RAM
|_ BRIX (Linux Ubuntu 15.10) - 8GB RAM
To some it may look like overkill, but it really is not: my goal here is to reuse the hardware as much as possible instead of throwing it away.
Now, I think one thing is still missing.
When I built the hackintosh, I planned to dual boot Windows 10 for coding or gaming, but it happens I almost never dual boot, so yeah, I want to add a BRIX with Windows 10.
The only difference is that I would plan for a bigger BRIX, maybe in the range of $600, with an i7 CPU and 16GB of RAM.
So Windows 10 is not absolutely needed for Redtamarin, but still…
It could be useful for many use cases:
- publishing Adobe AIR apps for Windows 10
things like customised desktop tiles, etc.
and different needs for the Windows installer
- being able to convert .msi installers to .appx installers
with the Desktop App Converter, to publish to the Universal Windows Platform (UWP)
- being able to use and test things with the “Bash on Ubuntu on Windows”
for Redtamarin shells, etc.
All in all, even if your goal is not compiling C/C++ for different platforms, if you publish apps for different platforms, having the hardware for each platform helps enormously. I would say a $200 investment in a little BRIX or an equivalent small form factor desktop is worth it.
It is a bit like mobile cross-platform development: you can test a lot with the Android and iOS emulators, but at some point you are going to need the real hardware.
And what about VMs? Well… I still use those for some edge cases:
even if I have an Ubuntu Linux desktop on hardware, I may install Debian, CentOS, etc. in a VM
to test specific things.
So how do you know when you need to move from a VM to real hardware?
I would say it depends mainly on what you are doing; in my case, for C/C++ compilation,
I needed the surplus of raw hardware resources, and you may need that too if you build ANEs, for example.
You may also need real hardware for very specific tests, like “a kiosk app that runs 24/7”,
where you want to test in “real conditions” or without any external interference.
Other use cases are things like OUYA, Apple TV, Android TV, etc.
I do think you want the real hardware to test those.
Bottom line: it depends on how long you are ready to wait for stuff…
with only one machine and VMs you can do a bit of stuff in parallel, but not everything:
you could not run GTA V full screen while C/C++ code is compiling,
because 1 machine shares all of its resources,
but if you have 2 machines you can do anything (even reboot or shut down)
because you don’t share any resources.
And here I would say: when you don’t have to wait, you can work more.
If you need to compile C++ for 20 minutes under Windows,
you want to be able to use those 20 minutes to do something else,
like writing documentation, editing graphics/icons, etc.
When you are a small entity (indie dev, solo dev, small team, etc.)
I think it’s pretty important to be able to scale your resources,
and adding more hardware, imho, helps you do that.