[Tfug] Version Control

Bexley Hall bexley401 at yahoo.com
Tue Mar 26 16:55:56 MST 2013


Hi Tom,

>> I.e., the folks *using* them are mostly interested in writing
>> and tracking changes to *source code* and little else.
>
> Well, which of these use-cases are you evaluating for?  git is able,

Which does a *business* decide is important?  "Let's track
the changes in the mechanical drawings for the PCs that we
manufacture and not worry about the changes in the BIOS
that runs in them"?  "Let's track the changes in the BIOS
and not worry about the changes in the schematics for the
hardware it runs on"?  etc.

If you're just writing code, then all you care about is the code.
If you're just building metal boxes, then all you care about are
the metal boxes.

OTOH, if you are "delivering systems" then you care about every
aspect of that system!

> but not the best choice for "what did my resume file look like 6
> months ago?" but it does it fine (it's just not efficient) or "let me
> back up my database" (again, doable but with drawbacks, see
> stackoverflow).  If you've got a large codebase (or set of codebases)
> and docs and related things, and you want one VC for everything, you
> have to decide what gets priority.  Or pick a VC for "code" and a VC
> (or set of backup strategies) for other data.  Collaborative document
> editing has its own set of headaches.
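[An aside: the "resume 6 months ago" case is at least mechanically
easy in stock git, even if not efficient.  A sketch, assuming a repo
that tracks a hypothetical "resume.txt":

```shell
# Hypothetical repo in which "resume.txt" is a tracked file.
# Find the most recent commit whose date is at least six months old...
old=$(git rev-list -1 --before="6 months ago" HEAD)
# ...and print resume.txt as it existed at that commit.
git show "$old:resume.txt"
```

Once you have the commit id, the lookup itself is one command -- the
inefficiency is in how git stores non-code blobs, not in the interface.]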

*Any* sharing has headaches.  A "document" is no different than
a piece of code in that regard (OK, a piece of code probably
has a far more extensive governing specification and formally
defined test suite -- whereas a document could have a three line
specification and *no* test suite:  "Looks good to me!  Send it
to the printer...")

>>> I don't know if anyone's mentioned this, but git (among others as well, of
>>> course) is fully distributed. Once you clone the repository from your
>>> server, you will still have it (including all the commits and file history
>>> and so forth) even if your server explodes. Also, you can go offline,
>>> continue working, making commits, etc, and then push that to the main
>>> repository. It's quite nice.
>>
>> I understand.  Though colleagues I have spoken with claim this
>> to be a *drawback* -- individual developers tend to work in
>> isolation "too long" and then there is a "commit frenzy" where
>> folks are all trying to merge very different versions of the
>> *entire* repository at the same time -- just before a release.
>> I.e., because they can freely modify their copy of the *entire*
>> repository, they are more likely to make "little tweaks" to
>> interfaces, etc.  Of course, always *thinking* those tweaks
>> are innocuous... until someone else's code *uses* that
>> interface (expecting it to be in its untweaked state) and
>> finds things suddenly don't work.  Then, the blame hunt begins
>> (all while management is pushing to get something out the door).
>
> Development silos predate DVCS, based on horror stories from my
> colleagues before git was a thing (and some things I observed myself).

With anything, "distribution" (spatial, temporal, etc.) works when
you can count on the entities (people, bits of hardware, etc.) to be
predictable and reliable (within quantifiable limits).  I.e., "play
well with others".  Witness Yahoo's work-at-home scenario...  :-/



