Wednesday, December 10, 2014

Unit 14: Busy to the last possible minute

Some folks had attributed their problems running PHP scripts to preparing them in TextEdit, so I switched to GEdit, which comes with Ubuntu's software suite, and I still had no luck. I was aware that TextEdit has a nasty habit of adding the ".rtf" extension to all files, but such was not the case with GEdit. I have no idea where I've gone wrong. All of this material is stuff I wanted to master, and I would be satisfied even without mastery so long as I could get things to run; I had a pretty good run of "luck" or "ability to follow *some* instructions" before hitting the PHP section. I do have a couple weeks over break to review everything and attempt to do the LAMP part over on my Dell laptop, running Windows. They always say that the importance of an experiment is the ability to replicate it, and I will be the first to admit there were several times this semester when I think I finally got stuff to work on a fluke. Looking back at the documentation I created in the form of screenshots, there were a lot of shots from when things were going well and few from when I came up against challenges. I have indeed taken steps to get away from my earlier, long-held reaction of getting emotional when I ought to double down on being rational. The topic I would most like to explore is: where on earth did I go wrong with this PHP stuff? What is it about this particular script (one I have never seen before) that made it such a problem?
Still, I feel like I went far in actually using Linux, opening directories, apt-getting applications and running them, navigating with some degree of dexterity I have never had. Now on to the final and off to work.

Tuesday, December 9, 2014

Unit 13: Hitting a brick wall

During our work in Excel at the end of November, I learned I would have to install Yosemite to update my OS in order to run the most current versions of Office and Sigil. But if I load up Yosemite, I will lose a ton of old (not free) software I use very frequently. I would have to buy a whole batch of new software to replace those apps (Adobe Creative Suite, which I have put off updating since the end of my design career but still use on an almost weekly basis, and I won't do the subscription version). I'm not really budgeted to make a major software purchase until January. This is an alarming situation, and I don't know what I can do for a workaround.

Regarding coding, I definitely want to learn more about PHP; using it to query our database and get it to put out answers was a very practical project. I definitely want to tame, if not master, PHP. So far all I've done is PHP in this class and NetBeans, which seems to be Java-based. Man, with coding you generally have to cross your t's and dot your i's, because if you make the slightest error, it won't run. That said, when you can compile code and it works, it's rewarding. Then you can copy and paste parts of code you know work and customize them here and there.
I was really impressed with Excel generating those customized lines of code for each ISBN, and if my Paste Special function were working the right way, I could do it automatically.
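What that Excel trick was doing, as far as I can tell, was splicing each ISBN into a boilerplate template. Here's a minimal sketch of the same idea in Python (the template, URL, and ISBNs are all made up just for illustration):

```python
# Generate one customized line of code per ISBN by splicing each ISBN
# into a template string -- the same idea as Excel's concatenation trick.
isbns = ["9780143127741", "9780262033848", "9781593279288"]  # sample ISBNs

template = '<img src="http://covers.example.org/isbn/{isbn}.jpg" alt="{isbn}">'

lines = [template.format(isbn=isbn) for isbn in isbns]
for line in lines:
    print(line)
```

Once the template is right for one ISBN, the loop handles the rest automatically, which beats pasting row by row.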

Since writing and compiling code doesn't happen in a WYSIWYG environment (like HTML with a nice editor, where you can just about see your changes on the fly), you might be mystified for a while before identifying where you made a mistake. Yeah, a compiler is a crazy and very opaque black box, and you realize how instantaneous feedback can propel a project. When you can see your changes in something like real time you can really move quickly; without that quick feedback from the system, you're kind of in the dark.

Monday, November 17, 2014

Unit 12 Project Management

The reading by Mark Keil was very interesting, to say the least. He talks about a software project that took up the better part of a decade and was still not completely functional before finally getting the ax. Due to project factors, psychological factors, social factors, and organizational factors, there were many reasons why this project was allowed to linger on. A very large part of it was that the team involved and its leader had developed many good products before attempting to roll out the CONFIG project. CONFIG was supposed to assist salespeople in making deals with preferred clients, which can be tricky: sales folk do like to throw around discounts for preferred clients, but they definitely need a range to work in so as not to undersell the company (and their own commissions).

Probably the biggest problem this writer can see with the CONFIG project was that it was not interoperable with the same company's other successful and well-accepted system, the Price Quotation System (PQS). I must point out that since this project was happening in the 1980s, maybe software developers were not as attuned to the notion of developing software in interoperable suites. But a very different business model has emerged since the 80s, when a standalone might have had some appeal; even so, I don't think a standalone incapable of interoperating with the same company's own products was ever a good idea. Nevertheless, the company in the study was blindsided by its own track record of success.

As best as I understand, the current ideal model in software development and deployment is not creating an application that the developer sells like books or magazines to customers, it is creating an online web service where users or subscribers can access a whole suite of tools which are continuously upgraded as a web service. In terms of industrial models this is about the difference between designing and manufacturing a locomotive and developing and managing an entire railroad. So the current model of web services is far more vast and encompassing than the old model of software publishing.

So according to Keil, part of the reason for failure of the CONFIG project was that the team involved and its leader had a solid track record of success with the company. As development of CONFIG got bogged down, the team and the company threw more money at the project, something Keil calls project escalation, but nowadays might be called "doubling down," which may or may not be related to "doubling down on a busted flush."

Also the CONFIG project manager, Tom Jones, was very popular within the company and had a fantastic reputation for success, so the company was able to procure the resources Jones requested. The company also thought that if CONFIG ever went live, there would be a huge payoff, plus it had already sunk plenty of resources into R&D, so maybe a little more effort would push the project to completion and its payday. The notion of pulling the plug on the project when it seemed "so close to completion" seemed like dumping so much investment down the drain at the cusp of success.

It was only after two huge blows came to the company that management reviewed and reconsidered the CONFIG project. These were the death of the project manager, Tom Jones, and a huge downturn in the software market at the end of the 1980s. Only then, after the better part of a decade and countless (mythical) "man-months" had been expended, did the tap of resources get turned off.

Other good readings on project management were the assigned Frank Cervone articles; he is adept at project management but also goes out of his way to make his advice relevant to librarians. Cervone has developed a formula for risk assessment, weighted by the criticality of the function that would be lost should a given misfortune strike as well as the actual likelihood of that disaster striking; this was entirely novel to this reader. Cervone stresses that the best risk-avoidance strategy is a high degree of communication throughout the project team and the organization, something I can attest to based on my own experience in software development projects. He couples this with using a flexible model (i.e., anything other than the traditional dependency-heavy "pipeline" model). Some of Cervone's alternative models include the spiral model and iterative prototyping. Cervone's continual use of examples from his many library projects adds more validity to his articles as well.

I think the instructor had a few words about project management being no walk in the park, and from the projects I have worked on, there almost always seems to be some set of problems that can never be foreseen. However, having a plan, especially one that can be modified within reason, is a critical part of the puzzle. The traditional "pipeline" project management model is clearly not flexible enough for the contingencies (and client/project-owner deliberation and changing specs on the fly), so a number of recent models have come out. All of them seem to be variations on a more flexible pipeline, managing the many dependencies of programming while also accommodating the vagaries of the client. Waterfall seems a good way to manage the dependencies; agile techniques like spiral, XP, and iterative prototyping seem to do well in handling ever-changing client specifications. Let's just say that the "clean room" model can never be used in any project where the client can change their mind after the project has commenced. A whole batch of permutations between waterfall and more agile models seems to be how project managers and project management theorists have dealt with both factors, but no one model seems to have become dominant.

Tuesday, November 11, 2014

Unit 11: Trial by ordeal

When I began this course, I was allergic to the command line. Okay, that's what I would tell people so I could avoid it, but even then I wanted a degree of familiarity with the shell whereby I could at least navigate to where I needed to go, open directories, and run executables. I didn't even understand what configuring meant before starting this class; actually, I hardly knew what the LAMP server setup was about when I read online materials for the DigIn certificate about four years ago while investigating SIRLS. But it seemed impressive as an accomplishment. What's funny is that when I first moved to Tucson in 2007, my immediate goal was to do a series of Flash projects, thinking that there was a future in Flash. But a series of jobs involving large collections of paper and digital items, and a series of questions about production bottlenecks, led me to think long and hard about enrolling in a program on the technological aspects of library studies. I was very fortunate to have a very good school in that field right here in Tucson, a city that frequently underwhelms me in most fields of endeavor not related to the U of A.
As always, the conceptual stuff was easier for me; the practical part is still really difficult and frustrating, but at least the frustration isn't irrational or completely emotional like it was before. I have now done things in the CL environment; had I had those skills two years ago, I would have gotten an IT trainee position in the same library where I am now just a late-night info desk guy. I think my interest in MySQL and my desire to learn whatever I can about databases have motivated a deeper interest in getting better at using Linux and comprehending PHP. I guess I needed a functional model in my mind. Previous experience with the CL left me wanting to avoid it at all costs, but I knew I'd have to use it if I wanted to use the remaining components of the LAMP stack. Another thing that was really useful for understanding how the LAMP configuration works in the framework of the dynamic web (a.k.a. Web 2.0) was the first eight and a half minutes of Prof. Fulton describing it in a video in my IRLS 504 core class; that gelled it in my mind. In the run of this class I finally got a laptop, so I'm going to have to do a second download of Ubuntu for that machine and try to set the LAMP stack up on it, to be able to work in a more flexible fashion (like anytime after 8pm) than my current setup of a desktop in a room that has been taken over by my toddler. Yeah, that would have been a much better situation than how this has shaken out so far, but live and learn, I guess. If I can create a database like the photographer one, but for my own purposes, it will all be very worthwhile.

Tuesday, November 4, 2014

Unit 10: Databases Part 2 (Electric Boogaloo)

SQL seemed a lot easier to learn than Linux; its syntax is more like the kinds of things human beings say to one another (human-readable), for the most part. Last week I committed an EPIC FAIL in posting some tables which were extremely flawed, because I had accessed the Mostafa tutorials by googling "Mostafa MySQL" instead of going through UACBT/VTC; doing it through Google was not the same as doing it through the VPN, so I was unable to download his movies after Section 2. This week I viewed all of last week's Mostafa videos and learned that I had not really normalized my data correctly, so it's going to be the pre-fab images folder all the way for me, I guess. The data set I had in mind was a lot less complicated than what we have here, and I couldn't really figure out a primary key for it. If this were the first time I discovered something I posted was completely messed up, I wouldn't mention it, but now I will have a trail of online posts that make me look like an idiot, only a week after I posted them. Good thing the internet is so malleable that nobody will ever see that ;)
So the hardest concept for this week was table joins; I think you join tables to expand "your net" when looking for query results? Mr. Mostafa just about lost me when he started using single-letter abbreviations as aliases in his commands.
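Here's how I understand joins so far, sketched with Python's built-in sqlite3 rather than the class's MySQL setup, so it runs anywhere without a server (the table names and data are invented, and "p" and "ph" stand in for those single-letter aliases):

```python
import sqlite3

# In-memory database: no MySQL server needed for this sketch.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE photographers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, title TEXT, "
            "photographer_id INTEGER)")
cur.execute("INSERT INTO photographers VALUES (1, 'Ansel Adams')")
cur.execute("INSERT INTO photos VALUES (1, 'Moonrise', 1)")

# The join "casts a wider net": the result pulls columns from BOTH tables,
# pairing rows wherever the key columns match. The aliases (p, ph) just
# save typing the full table names in the SELECT and ON clauses.
rows = cur.execute("""
    SELECT p.name, ph.title
    FROM photographers AS p
    JOIN photos AS ph ON ph.photographer_id = p.id
""").fetchall()

print(rows)
```

So the aliases aren't doing anything mysterious; the query reads the same with the full table names spelled out, just longer.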

In answer to a question in the assignments, I have a hunch it would be easy to transcribe requests to edit data in Webmin into the MySQL command line, because Webmin seems to replicate MySQL commands in both syntax and semantics.
Hm, one real challenge for me was attempting to summon Webmin. This whole thing of starting up Webmin via the command line and then firing it up in a web browser can make your head spin from time to time, but we did it a couple of times, and the second time I was able to do it successfully just from my notes, no googling needed. Also, when starting Webmin there was some message in the terminal about "no super cow powers," and I wanted to find out more about that. (You can read about it here; I guess it's an Ubuntu apt-versus-aptitude thing, not Webmin, that is the source of super cow powers:
http://unix.stackexchange.com/questions/92185/whats-the-story-behind-super-cow-powers
).
Oh man, you can tell I've been working with the command line in SQL too long when I make my end parentheses on another line.
I am realizing sometimes it is easier for me to "get" things in review than when say, Joshua Mostafa is lecturing about it on the first go 'round. Like when he fires up MySQL in subsequent videos, I was able to write down the commands he used more easily than when he introduced those commands.

Tuesday, October 28, 2014

Unit 9: Databases Part I



The most difficult part for me this week was learning the system of notation in the ERDs and trying to develop a sensible ERD for my own database. That said, I’m really, really excited to learn about databases and relational databases which I first heard about in the summer of 2006 but had no real idea about until recently. At that time I was working for a plastic surgeon’s office populating a Cumulus visual database with before and after pictures of his variety of procedures (URL available upon request). I think the manager was going to create this database and administer the website with Drupal, another application I am just now getting to know in a functional manner.
A few weeks ago I was emailing a friend who is a retired programmer (retired at 33 because he made a fortune), and he told me “It’s all about the db's, man!” I am realizing that yes, it is all about the db's: pretty much any web service or social media site is going to be entirely driven by users accessing servers which call up content from databases (and I guess the content in the database is managed by a CMS). So this is the stuff we will really want to get to know if we are going to be useful. That's a real motivator.
Oh another hard part was figuring out where I had downloaded MySQL because I now have three servers in Ubuntu-land and had run the apt-get command from the virtual server, but for some reason the application downloaded to Virtual Machine #1.
No wait, the hardest part, and something I'm not sure I have fully comprehended correctly, was the Third Normal Form. I get the first two forms, and I comprehend the concatenation thing of folding two rows into a single entity. It seems a lot of people have been trying to explain databases, because you can find a LOT of clips on YouTube attempting to explain parts of them. Oh yeah, I worked as a contractor and project coordinator at Oracle from November of 2006 to February of 2007; I would have hoped to pick up database knowledge by osmosis, but I guess that doesn't happen, not even if you've seen Larry's helicopter land on the campus. And no, they didn't play "Ride of the Valkyries" over the PA while that happened. However, I stand firm: the Third Normal Form was the hardest thing for me. Maybe Fred Coulson can set it to music? Well, I just ran a search on YouTube and got over 10 pages of results for "Third Normal Form," and most of them actually do look like they refer to RDBMSs (as opposed to being about dogs on skateboards or cats on pianos), so I know what I'll be doing tonight...
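To check my own understanding of 3NF, here's the textbook move, splitting out a transitive dependency, sketched again with Python's built-in sqlite3 (the book and publisher columns are made up, not my actual data set):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A single books table with a publisher_city column would NOT be in 3NF:
# publisher_city depends on publisher, which depends on the key (isbn).
# That chain, isbn -> publisher -> city, is a transitive dependency.
#
# The 3NF fix: publisher facts get their own table, and books keeps
# only the foreign key.
cur.execute("CREATE TABLE publishers (name TEXT PRIMARY KEY, city TEXT)")
cur.execute("""CREATE TABLE books (
    isbn TEXT PRIMARY KEY,
    title TEXT,
    publisher TEXT REFERENCES publishers(name))""")

cur.execute("INSERT INTO publishers VALUES ('Penguin', 'New York')")
cur.execute("INSERT INTO books VALUES ('9780143127741', 'Some Title', 'Penguin')")
cur.execute("INSERT INTO books VALUES ('9780143039433', 'Another Title', 'Penguin')")

# The city is stored exactly once, no matter how many Penguin books exist;
# a join recovers it whenever a query needs it.
city = cur.execute("""
    SELECT pub.city FROM books AS b
    JOIN publishers AS pub ON b.publisher = pub.name
    WHERE b.isbn = '9780143127741'
""").fetchone()[0]
print(city)
```

The payoff is that updating the publisher's city means changing one row instead of hunting down every book that repeats it.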

Tuesday, October 21, 2014

Unit 8: Technology Planning



This week’s readings were broad and varied widely. Some of the subject matter was familiar to me from having taken IRLS 674, Managing in the Digital Environment, so Don Sager’s article on environmental scanning looked a lot like the SWOT analysis material covered in 674. Michael Stephens’ article on technoplans vs. technolust seemed sensible, and although technolust got equal billing, it only accounted for a small portion of the reading. That was actually good, because technolust, although a real trap that organizations with limited budgets or limited needs can fall into, is a minor problem in the big scheme of things.
The Bertot reading on federal funding for library tech upgrades was really informative and moved the agenda into real nuts and bolts: how LSTA funds are disbursed through state libraries (whose existence was heretofore unknown to me), how local libraries submit their technology plans to the Universal Service Administrative Company, and how libraries are currently dealing with the LSTA program.
In Whittaker and company’s “What Went Wrong?...” we got an analysis from a business consultant of technology projects that didn’t make the final cut and why they failed. Her analysis rang true with my own experiences: she sees project failure happening due to poor planning, weak relevance of the change to the business mission, and lack of management support. I honestly think her assessment of a 31% failure rate might be a little on the charitable side; when big technology changes go wrong they are pretty public, but I’m sure there are many smaller technology changes which fail and get swept under the rug. Otherwise, a lot of what Whittaker et al. say seems right on the money. The explanation of unrealistic planning seems valid, including underestimating training time for new tech, as does her focus on undelivered products from third-party vendors, a huge gripe in some projects I have worked on.
Gwen Gregory’s “From Construction to Technology” article was another good nuts and bolts summary about how LSTA affects libraries and how LSTA differs from the previous LSCA. It seemed helpful when I read it but doesn’t really stand out a couple days later.
Eric Chabrow’s “State of the Union” focused on organizations which had done technology upgrades and suffered for their efforts. It was an easier read than Whittaker’s “What Went Wrong?” and in the same vein, but focused on government agencies in the “security sector.” If your job is hunting bank robbers, tracking terrorists, or making sure corporate malefactors pay their fair share of taxes, dealing with technology issues or a rough technology transition can only add to the burden of a tough job. Chabrow makes a couple of really good points about business, technology, and government. In the enterprise sector, if a transition seems to be failing, a manager will pull the plug, probably at the first sign of trouble. In government, where results are not as accountable to “audit culture,” a flagging project can be kept on life support indefinitely. Chabrow’s great advice: “If you’re going to fail, fail fast,” i.e., don’t prolong the agony, just pull the plug.
Chabrow also takes a good look at the toll projects in peril can take on management. He points out that rough business transitions can lead to a huge out-migration of staff and management, and that a project passed among several managers (who have to be brought up to speed after the project has begun) can be a kiss of death for projects that might have succeeded had the initiating manager stayed on. I can tell you from experience that a new manager who does not know the daily problems of a department will have a hard time developing credibility with the staff. Chabrow also scores in talking about issues with third-party contractors and the frictions that arise when a project is run by teams with differing perspectives and differing needs. As far as articles on planning technological and organizational change go, the Chabrow reading was the best.
In Dugan’s “Information Technology Plans,” we get down to the nitty-gritty of actually writing a technology plan and the kinds of evaluation a library will need to do (including assessment of the environment, as Sager pointed out) to get that technology plan together. His best piece of advice: “A question that should be continuously answered is: why is information technology necessary to fulfill this need? Each response should be outcomes based.”
The OCLC reading on environmental scanning could have been put higher in the stack (at an earlier position), because many of its main points had already been made by the other readings. It was also showing its age, having been created at the beginning of the Web 2.0 era, but it was right on when talking about the trend toward self-service in the library, patrons being satisfied with less when they are not aware of the other information options a librarian could provide, and most of all the trend toward a seamless information environment (although in the 11 years since this scan was made, some libraries have caught on to the usefulness of social media).
The Gerding and MacKellar piece is probably one of the most practical pieces from this unit. It made the best argument for a library having a technology plan and guided the reader through all the steps to securing a modern conduit of funding for a library to acquire the technology to reach its goals. If I knew someone from an up-and-coming library or cultural memory institution looking for a way to get their technology plan initiated, I would probably recommend they read this article first. Some of the great advice offered here: organize collaborative efforts with like-minded organizations, since funders take grant proposals with partners more seriously; and have a technology plan in place, since it gives potential funders proof that the organization seeking technology funding is serious, with concrete plans, and funders react positively to organizations which have already determined how they will use their technology. The article then describes the kinds of technologies that were trending when it was written and the varieties of grants the Institute of Museum and Library Services makes available through state libraries, and summarizes with success stories of libraries and a dense distillation of tips for success.