Hacker News

I agree with you in principle, but I have learned to accept that this only works for 80% of the functionality. Maybe this works for a simple Diablo or NodeJS project, but in any large production system there is a gray area of “messy shit” you need, and a CI system being able to cater to these problems is a good thing.

Dockerizing things is a step in the right direction, at least from the perspective of reproducibility, but what if you are targeting many different OSes / architectures? At QuasarDB we target Windows, Linux, FreeBSD, and OSX, and all of that on ARM as well. Then we need to be able to set up and tear down whole clusters of instances, reproduce certain scenarios, and whatnot.

You can make this stuff easier by writing a lot of supporting code to manage this, including shell scripts, but to make it an integrated part of CI? I think not.



I'm curious: what's a Diablo project? I've never heard of such a technology, unless you're speaking of the game of the same name.

Did you possibly mean Django?


Argh, it was indeed Django. I was on mobile and it must have been autocorrected.


While it does come up, I think it's a fairly rare problem. So much stuff is "x86 Linux" or, in rare cases, "ARM Linux" that it doesn't often make sense to have a cross-platform CI system.

Obviously a db is a counterexample. So is Node, or a compiler.

But at least from my experience, a huge number of apps are simply REST/CRUD targeting a homogeneous architecture.


Unless we're talking proprietary software deployed to only one environment, or something really trivial, it's still totally worth testing other environments / architectures.

You'll find dependency compilation issues, path case issues, reserved name usage, assumptions about filesystem layout, etc., which break the code outside of x86 Linux.
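The path case issue in particular bites people moving between case-insensitive filesystems (macOS, Windows defaults) and case-sensitive ones (Linux): `open("Config.json")` silently succeeds on the former even when the file on disk is `config.json`. A minimal sketch of a check that catches this class of bug regardless of the host filesystem (the helper name is mine, not from any library):

```python
import os
import tempfile

def exists_with_exact_case(path: str) -> bool:
    """Return True only if `path` exists with exactly this casing.

    os.path.exists() would say True for the wrong casing on a
    case-insensitive filesystem; comparing against the directory
    listing gives a consistent answer everywhere.
    """
    directory, name = os.path.split(os.path.abspath(path))
    try:
        return name in os.listdir(directory)
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    # Demo: create "config.json", then probe it with the wrong casing.
    with tempfile.TemporaryDirectory() as tmp:
        real = os.path.join(tmp, "config.json")
        open(real, "w").close()
        print(exists_with_exact_case(real))                              # True on any OS
        print(exists_with_exact_case(os.path.join(tmp, "Config.json")))  # False on any OS
```

Running a check like this in CI on a case-insensitive platform surfaces the mismatch before it becomes a Linux-only deployment failure.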



