Monday, November 17, 2014

Unit 12 Project Management

The reading by Mark Keil was very interesting, to say the least. He talks about a software project that took up the better part of a decade and was never completely functional before finally getting the ax. Project factors, psychological factors, social factors, and organizational factors all gave this project reasons to linger on. I guess a very large part of this was that the team involved and its leader had developed many good products before attempting to roll out the CONFIG project. CONFIG was supposed to assist salespeople in making deals with preferred clients, which can be tricky: sales folk do like to throw around discounts for preferred clients, but they definitely need a range to work in so as not to undersell the company (and their own commissions).

Probably the biggest problem this writer can see with the CONFIG project was that it was not interoperable with the same company's other successful and well-accepted system, the Price Quotation System (PQS). I must point out that since this project was happening in the 1980s, maybe software developers were not as attuned to the notion of developing software in interoperable suites. A standalone application might have had some appeal in the 80s, but even then I don't think a standalone incapable of interoperating with the same company's own products was a good idea. Nevertheless, the company in the study was blindsided by its own track record of success.

As best I understand, the current ideal model in software development and deployment is not creating an application that the developer sells like books or magazines to customers; it is creating an online web service where users or subscribers can access a whole suite of tools that are continuously upgraded. In terms of industrial models, this is about the difference between designing and manufacturing a locomotive and developing and managing an entire railroad. The current model of web services is far more vast and encompassing than the old model of software publishing.

So according to Keil, part of the reason for the failure of the CONFIG project was that the team involved and its leader had a solid track record of success with the company. As development of CONFIG got bogged down, the team and the company threw more money at the project, something Keil calls project escalation, but which nowadays might be called "doubling down," which may or may not be related to "doubling down on a busted flush."

Also the CONFIG project manager, Tom Jones, was very popular within the company and had a fantastic reputation for success, so the company was able to procure the resources Jones requested. The company also thought that if CONFIG ever went live, there would be a huge payoff, plus it had already sunk plenty of resources into R&D, so maybe a little more effort would push the project to completion and its payday. The notion of pulling the plug on the project when it seemed "so close to completion" seemed like dumping so much investment down the drain at the cusp of success.

It was only after two huge blows to the company that management reviewed and reconsidered the CONFIG project: the death of the project manager, Tom Jones, and a huge downturn in the software market at the end of the 1980s. Only then, after the better part of a decade and countless (mythical) "man-months" had been expended, did the tap of resources get turned off.

Other good readings on project management were the assigned Frank Cervone articles, as he is adept at project management but also goes out of his way to make his advice relevant to librarians. Cervone has developed a formula for risk assessment that weighs the criticality of the function that would be lost should a given misfortune strike as well as the actual likelihood of that disaster striking; this was entirely novel to this reader. Cervone stresses that the best risk-avoidance strategy is a high degree of communication throughout the project team and the organization, something I can attest to based on my own experience in software development projects. He couples this with using a flexible model (i.e., anything other than the traditional dependency-heavy "Pipeline" model). Some of Cervone's alternative models include the spiral model and iterative prototyping. Cervone's continual use of examples from his many library projects adds more validity to his articles as well.
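The core idea of that kind of risk scoring, likelihood weighted by criticality, can be sketched in a few lines of Python. This is just an illustration of the multiply-and-rank idea with made-up scales and made-up risks, not Cervone's actual formula from the articles:

```python
# Illustrative likelihood-times-criticality risk scoring.
# The 1-5 scales and the example risks below are invented for
# illustration; they are not taken from Cervone's articles.

def risk_score(likelihood, criticality):
    """Both inputs on a 1-5 scale; a higher product means a riskier item."""
    return likelihood * criticality

# Hypothetical risks for a library software project
risks = {
    "server outage during migration": risk_score(2, 5),
    "staff unavailable for testing":  risk_score(4, 2),
    "vendor API change":              risk_score(3, 3),
}

# Rank so the highest-scoring risks get mitigation plans first
for name, score in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{score:2d}  {name}")
```

The point of the weighting is visible in the ranking: a rare but critical failure (the outage) outranks a likely but minor one (unavailable staff).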

I think the instructor had a few words about project management being no walk in the park, and from the projects I have worked on, there almost always seems to be some set of problems that can never be foreseen. However, having a plan, especially one that can be modified within reason, is a critical part of the puzzle. It is clear that the traditional "Pipeline" project management model is not flexible enough for the contingencies (and client/project-owner deliberation and changing specs on the fly), so a number of recent models have come out. All of them seem to be variations on a flexible pipeline that manages both the many dependencies of programming and the vagaries of the client. Waterfall seems a good way to manage the dependencies, while agile techniques like the spiral model, XP, and iterative prototyping seem to do well in handling ever-changing client specifications. Let's just say that the "clean room" model can never be used in any project where the client can change their mind after the project has commenced. A whole batch of permutations between waterfall and more agile models seems to be how project managers and project management theorists have dealt with both factors, but no one model seems to have become dominant.

Tuesday, November 11, 2014

Unit 11: Trial by ordeal

When I began this course, I was allergic to the command line. Okay, well, that's what I would tell people so I could avoid it, but even then I wanted a degree of familiarity with the shell whereby I could at least navigate to where I needed to go, open directories, and run executables. I didn't even understand what configuring meant before starting this class; actually, I hardly knew what the LAMP server setup was about when I read online materials for the DigIn certificate about four years ago while investigating SIRLS. But it seemed impressive as an accomplishment. What's funny is that when I first moved to Tucson in 2007 my immediate goal was to do a series of Flash projects, thinking there was a future in Flash, but a series of jobs involving large collections of paper and digital items, and a series of questions about production bottlenecks, led me to thinking long and hard about enrolling in a program on the technological aspects of library studies. I was very fortunate to have a very good school in that field right here in Tucson, a city that frequently underwhelms me in most fields of endeavor not related to the U of A.
As always, the conceptual stuff was easier for me, but the practical part is still really difficult and frustrating, though at least the frustration isn't irrational or completely emotional like it was before. I have now done things in the command-line environment; had I had those skills two years ago, I would have gotten an IT trainee position in the same library where I am now just a late-night info desk guy. I think my interest in MySQL and my desire to learn whatever I can about databases has motivated a deeper interest in getting better at using Linux and comprehending PHP. I guess I needed a functional model in my mind. Previous experience with the command line left me wanting to avoid it at all costs, but I knew I'd have to use it if I wanted to use the remaining components of the LAMP stack. Another thing that was really useful for understanding how the LAMP configuration works in the framework of the dynamic web (a.k.a. Web 2.0) was the first eight and a half minutes of Prof. Fulton describing it in a video in my IRLS 504 core class; that gelled it in my mind.
In the course of this class I finally got a laptop, so I'm going to have to do a second download of Ubuntu for that machine and try to set the LAMP stack up on it, to be able to work in a more flexible fashion (like anytime after 8pm) than my current setup of a desktop in a room that has been taken over by my toddler. Yeah, that would have been a much better situation than how this has shaken out so far, but live and learn, I guess. But if I can create a database like the photographer one, only for my own purposes, it would all be very worthwhile.

Tuesday, November 4, 2014

Unit 10: Databases P.2 (Electric Boogaloo)

SQL seemed a lot easier to learn than Linux; its syntax is more like the kinds of things human beings say to one another (human readable), for the most part. Last week I committed an EPIC FAIL in posting some tables which were extremely flawed, because I had accessed the Mostafa tutorials by googling "Mostafa MySQL" instead of going through UACBT/VTC, and doing it through Google was not the same as doing it through the VPN, so I was unable to download his movies after Section 2. This week I viewed all of last week's Mostafa videos and learned that I had not really normalized my data correctly, so it's gonna be the pre-fab images folder all the way for me, I guess. The data set I had in mind was a lot less complicated than what we have here, and I couldn't really figure out a primary key for it. If this were the first time I discovered something I posted was completely messed up, I wouldn't mention it, but now I will have a trail of online posts that make me look like an idiot, only a week after I posted them. Good thing the internet is so malleable that nobody will ever see that ;)
So the hardest concept for this week was table joins; I think you join tables to expand "your net" when looking for query results? Mr. Mostafa just about lost me when he started using single-letter abbreviations as aliases in his commands.
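That "expanding the net" hunch is roughly right: a join lets one query pull matched rows from two tables at once. Here is a minimal sketch using Python's built-in sqlite3 standing in for a MySQL server; the photographer/photo schema and all the data are invented for illustration, and the single-letter aliases are just shorthand for the table names, like the ones Mostafa uses:

```python
import sqlite3

# In-memory database standing in for a MySQL server.
# The schema and rows below are made up for this example.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE photographers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE photos (id INTEGER PRIMARY KEY, title TEXT,
                         photographer_id INTEGER);
    INSERT INTO photographers VALUES (1, 'Adams'), (2, 'Lange');
    INSERT INTO photos VALUES (1, 'Moonrise', 1),
                              (2, 'Migrant Mother', 2);
""")

# The JOIN matches each photo to its photographer via the key columns;
# 'p' and 'g' are single-letter aliases for the two table names.
rows = con.execute("""
    SELECT g.name, p.title
    FROM photos AS p
    JOIN photographers AS g ON p.photographer_id = g.id
    ORDER BY g.name;
""").fetchall()

for name, title in rows:
    print(name, "-", title)   # Adams - Moonrise / Lange - Migrant Mother
```

Without the join, a query could only see one table at a time; with it, the "net" covers both tables wherever the keys line up.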

In answer to a question in the assignments, I have a hunch it would be easy to transcribe requests to edit data in Webmin into the MySQL command line, because Webmin seems to replicate MySQL commands in both syntax and semantics.
Hm, one real challenge for me was attempting to summon the Webmin. This whole thing of starting up Webmin via the command line and then firing it up in a web browser can make your head spin from time to time, but we did it a couple of times, and the second time I was able to do it successfully just from my notes, without needing to google it. Also, when starting Webmin there was some message in the terminal about "No super cow powers," and I would like to find out more about that. (You can read about it here; I guess it's an Ubuntu apt-versus-aptitude thing, not Webmin, that is the source of super cow powers:
http://unix.stackexchange.com/questions/92185/whats-the-story-behind-super-cow-powers
).
Oh man, you can tell I've been working with the command line in SQL too long when I make my end parentheses on another line.
I am realizing that sometimes it is easier for me to "get" things on review than when, say, Joshua Mostafa is lecturing about them on the first go 'round. For example, when he fired up MySQL in subsequent videos, I was able to write down the commands he used more easily than when he first introduced them.