Archive for the ‘Technology’ Category

WordPress Theme Updates

February 22nd, 2009

I really have fallen in love with the inove WordPress theme. It's clean, yet stylish… modern, yet grounded. It is nearly perfection. But for my particular needs, it is not exactly perfect. Thankfully, the source is available and appears to be released under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 license. That's handy, because this blog is under the same license.

So, this afternoon I grabbed the source, spun up a git repository for it, and started making the changes I wanted to better fit my needs. I've now gotten it good enough to use on my blog, which means the code needs to be "shared alike", per the terms of the license. To that end, I invite anyone who is interested to grab the code from its github repository. Maybe the original author will take a peek at some of the option settings and incorporate them into the official version. Read more…

probonogeek Technology

Talking about ExtJS

February 22nd, 2009

In November of 2008 I traveled to frigid Chicago to attend the inaugural sprint of my company's new content management system. It was a week-long affair where we started the work of building a multi-client, Merb-based CMS to replace the old workhorse of the company, a proprietary Zope product known as ListMonster. ListMonster had served us well for many years, but its age was beginning to show and we knew the time had come for a serious upgrade. David, as a major proponent of all things Ruby, wanted us to develop in either Rails or a newfangled MVC framework known as Merb. In the end I'm not convinced that particular decision was really all that big of a deal, as Rails and Merb are so similar that Rails 3.0 will be Merb 2.0, and Merb 2.0 will be Rails 3.0. But the Merb decision wasn't the only big decision made that week; we also agreed to use ExtJS for all front end development. Read more…

probonogeek ExtJS

Time for a Change

February 15th, 2009

I started this blog with a tip of the hat to my earlier efforts at an online journal and a hope that Blogger would be a more permanent home than past attempts. By any objective measure, Blogger has been a complete success, clocking a total of 289 posts since April of 2005. Less prolific than, say, Wonkette's twenty posts a day, but not too shabby for a dude who's never kept a blog longer than a year.

Recently, I've grown frustrated with Blogger. It's a fine platform, and having someone else deal with the hosting is certainly a plus… but in the end, it's a service over which I have no control. It does exactly what Google wants it to do, and nothing more. For a long time I didn't want more… but times, they are a-changing. The first hint of longing came when friends launched two new blogs with WordPress, Minor Failures and GeekBeer. Both blogs have gravatar support, an idea with which I am absolutely smitten. Then, most recently, I posted some code examples and found Blogger's support for displaying that code most disappointing. Combined with the byzantine theming system, the inability to change the blog's domain name, and the general need to refresh the look & feel of the site, one gets a very compelling case to switch blogging platforms. Read more…

probonogeek Technology

The Trouble with Enumerables

February 12th, 2009

Quite a lot of political postings recently for what's supposed to be a technical journal… time for a return to our traditional values!

Today's topic is Enumerables. Originally I thought this post was going to be about iterators, but on reflection, iterators aren't really the trouble here… but I'm getting ahead of myself. Ruby, dynamic scripting language of MVC fame, has this Enumerable concept. It takes a set of objects and performs actions on that set, sometimes returning a modified set, sometimes returning a single element from the set. The available actions are known as iterators, and some of the more common ones are each, select, and collect. Read more…
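(If each, select, and collect mean nothing to you, here's a quick toy sketch of their standard Ruby behavior, using made-up data; detect is thrown in as the "single element" case.)

numbers = [1, 2, 3, 4, 5]

numbers.each { |n| puts n }                   # visits every element; returns the original array
evens   = numbers.select  { |n| n % 2 == 0 }  # returns the subset the block approves of: [2, 4]
squares = numbers.collect { |n| n * n }       # returns a new set built by the block: [1, 4, 9, 16, 25]
first   = numbers.detect  { |n| n > 3 }       # returns a single element from the set: 4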

probonogeek Technology

My First Rails Application

October 23rd, 2008

Last week Articulated Man launched my first solo Rails application, a voter's guide for New England federal races. The site was sponsored by the New England Alliance for Children's Health, whose site we had done earlier in the year as just a standard site. The voter's guide required a bit more functionality, thus the decision was made to develop in Rails and use it as my first stab at Rails development.

The site turned out great, mostly because we have outstanding designers who can make anything look good. I also learned quite a bit about the nuts and bolts of a Rails site, which is something you don’t really get from just reading the book.

While it’s still too soon to make any firm declarations about Rails, I will say it was very nice to have some provided structure when building the application. With LegSim, and pretty much any other project I’ve done, I had to build everything out of whole cloth… thus, LegSim is rather amorphous, having changed throughout the years and never following a clearly defined structure. With Rails you get that out of the box… perhaps more structure than I would prefer, but I think I’d prefer too much structure over too little, at least at my current stage as a developer.
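To give a flavor of what I mean by provided structure, here is a bare-bones sketch of the conventions Rails hands you out of the box (the names are invented for illustration, not lifted from the voter's guide, and this is Rails circa 2008 syntax):

# app/models/candidate.rb -- by convention, this maps to a "candidates" table
class Candidate < ActiveRecord::Base
  belongs_to :race
  validates_presence_of :name
end

# app/controllers/candidates_controller.rb -- one action per page
class CandidatesController < ApplicationController
  def index
    @candidates = Candidate.find(:all, :order => 'name')
  end

  def show
    @candidate = Candidate.find(params[:id])
  end
end

# config/routes.rb -- RESTful routes, no hand-rolled URL dispatching
ActionController::Routing::Routes.draw do |map|
  map.resources :candidates
end

The point isn't the particular code, it's that every Rails application puts these pieces in the same places, which is exactly the discipline LegSim has never had.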

Speaking of LegSim, the UW Congress course has started up again this quarter, giving me a boost of excitement to get developing again. Already some good stuff happening there as I integrate what I’ve learned since doing web development full time. Sadly, Archon and LegSim v5 have been put on hold until later, as I need to have a finished product well before either of those technologies will be ready for prime time. But some day–some day soon–LegSim will be rewritten and be better than ever!

probonogeek Technology

Chrome: Speculation

September 3rd, 2008

If you are a geek and you haven’t heard about Chrome, then you’ve been living under a rock since Monday when it was first leaked. If you aren’t a geek, your failure to notice the news is acceptable, understandable, forgivable. But now it’s on my blog, and you have no excuse, so get wise.

There are more than a handful of interesting things to say about Chrome, and none of them require me to have actually tried Chrome, since it's not yet available for Linux users… here they are, in no particular order.

1) The Comic Book

Google used an unorthodox approach to explaining the technology driving their fancy new browser. Instead of your standard, boring white paper, Google released a freaking comic book! It's still a point-by-point review of the problems of current-day browsers and Google's proposed solutions, but it goes a step further with clever pictures to describe complex technical problems. It reminds me of an excellent video on Trusted Computing circulated years back (worth a watch if you haven't seen it before). Now, let's not fool ourselves, the Chrome comic book is not for the faint of heart… processes versus threads, memory footprint, hidden class transitions, incremental garbage collection… this isn't kids' stuff and certainly not for public consumption. Where it excels is communicating complex ideas to folks with a shared vocabulary but without shared expertise. I don't develop browsers, and probably never will, but I still understood the message. A contributor to Debian Planet quipped, "I think it would be good if we had a set of comics that explained all the aspects of how computers work," and I couldn't agree more. I suppose that's one advantage of having serious cash to throw around.

2) Open Source as Market Motivator

It's my belief that Google has zero interest in competing with the likes of Firefox and Internet Explorer, giants that they are… or even the lesser three: Safari, Opera, and Konqueror (the origins of WebKit… KDE for the win!). Chrome will never be as big as those browsers and Google doesn't care. Google's purpose, stated in various press releases, developer conferences, and in the freakin' comic itself, is to improve the ecosystem in which they operate: the web. Google wants more content online, and more users searching for that content, in order to feed the growing advertising business on which Google's billions are based. Chrome isn't about challenging FF or IE for market share, it is about challenging FF and IE to be better.

To accomplish these goals they have open-sourced the browser and all of its fancy doodads. Some clever things here. First, they used WebKit as their rendering engine, and as I mentioned, I love WebKit because it is based on KHTML, which was one of the first good open-source HTML renderers and is still in use by Konqueror. What's unique about WebKit is that neither FF (which uses Gecko) nor IE (which uses something I will refer to simply as the suck) uses it. So, here you've got an entire implementation of a radical new way of building a web browser, with all sorts of cool features just begging for adoption, and neither of the big players has a leg up… both will have to tear out parts and re-implement around their own rendering engines. And re-implement they shall! If Chrome can deliver on all of Google's lofty promises, then users are going to gravitate to whichever browser can best deliver the same results.

3) Process vs. Threads

This is the big thing that Chrome is supposed to offer. Modern-day browsers use tabs to allow users to visit many pages at once, which is handy… but in order to visit multiple pages like that, the browser has to be able to do many things at once. Until now, that was done with threads.

To help visualize a thread, imagine you have a fourteen-year-old kid and you tell him to deliver newspapers along a street. Off he goes and does his thing and he does it very well. Then, the next day, you tell the kid that while he's delivering the papers you'd also like him to compose an opera. So, he goes and delivers a few papers, and then stops and jots down a few notes, maybe a harmony or two, then back to paper delivery. He gets it done, but all that bouncing from one task to another causes him to do it a bit slower. The next day you ask him to do all those things he was already doing and do your taxes (does anyone else get a cat on the second result?!). This time, when he switches over to doing your taxes, his poor little fourteen-year-old brain can't handle it and the whole operation goes to hell… no papers get delivered, no opera is composed, and certainly no tax returns get filed. That's threading… one "person" switching between various jobs.

Now, with processes, it's like you have THREE fourteen-year-old boys to do your bidding… one goes off to deliver the papers, one composes the opera, and the third does your taxes. Even if the third kid can't deliver, his epic failure doesn't impact the performance of the other two. You may still get audited, but at least you'll know the papers are delivered and opera lovers can rave about the latest wunderkind.

IE and FF use threads (though, rumor on the street is that IE8 beta is process based)… so if one thread goes wonky, you probably lose the entire browser. Chrome is different: it uses a separate process for each tab, so that if one has a problem, the others aren't impacted. If, at this point, you are saying "big deal, how often does my browser crash?" you are right where I am. I use my browser for everything all day… 10 – 15 tabs at once is standard operating procedure for me. Maybe I'm not visiting the nefarious parts of the internets. But here's what is cool about their concept. It's not one process per HTTP request or page fetch, it's one process per tab/domain. Which means that so long as you are browsing around CNN.com, you operate within a single process, sharing memory for various javascript fun within that domain. But once you leave CNN.com to visit, say, nytimes.com, the old process is killed and a new one, with fresh uncluttered memory, is spawned. Which, if you don't know much about the AJAX security model, is really a clever approach. AJAX is sandboxed by design, meaning AJAX scripts running on a page at cnn.com can ONLY talk with cnn.com servers… they cannot make a request off to washingtonpost.com or whatever… it's all isolated. So now, when you go to gmail.com and sit there for HOURS, with its memory-consuming javascript, it is all washed away the moment you move to a new domain. Now that, my friends, is good news.

Of course, it comes with a cost… those processes each need their own memory, and while it may be virtual memory at first, once they start doing a lot of writing, and you get all those page faults, it's gonna be real memory… and then we'll see what happens on less-than-modern computers that don't have 2 GB of memory to throw around just to read their daily web comics.

4) Javascript: V8

I like javascript and have no patience for its detractors. If you haven't used the likes of prototype or jquery, you have no concept of what javascript is capable of or how it can be extended to do whatever you might possibly want to do. Having said that, javascript can be slow… painfully slow… on underpowered computers (like my laptop, now three years old). You can hear it chugging away on some javascript code. It's my observation, however, that it's not the javascript engine at fault, it's the javascript itself… folks relying too much on their framework and object-oriented design and not enough on smart coding.

For example, I recently retooled a javascript library that reordered a sequence of pulldown menus (known as select elements in HTML lingo). The previous version of the library iterated through the list of selects SO many times, it wasn't even funny (and I find most HTML/javascript-based conversations to be hilarious!). So, although I had to sacrifice a bit of encapsulation to do it, I was able to rewrite the library to be significantly faster… and my CPU thanked me for the effort. So, what does this have to do with Chrome?

Well, Chrome has a new javascript engine, V8, which is supposed to be a lot faster for various reasons. I guess that’s great… but, at least for the vast majority of javascript code out there, the real problem isn’t the engine, it’s the code. Google has an answer for that too, but the day I choose to learn Java is the day I choose to dust off the law degree.

5) Gears Out-of-the-Box

When I first learned about Gears, I wasn't excited. Then I went to Google I/O and got a little excited, so I tried it out… Firebug threw so many errors, and everything ran so slow, that I lost all my excitement and threw it out. I will say that the idea of a more robust javascript interface to the filesystem and to other hardware resources is a great idea… as is a persistent data storage system beyond cookies. But Google's got an uphill battle here. Until the majority of users have Gears installed, or a browser with Gears-like features, no web developer is going to utilize those tools, thus there will be no incentive for users to actually install them. I honestly have no clue how Flash managed to get installed on nearly every browser out there… but I don't see how any plugin that is as invasive as Gears is going to be able to repeat that miracle a second time. So, Gears out of the box?! Yeah, just another browser with proprietary extensions that are tempting, but should not be used.

6) User Interface

I haven’t seen it yet, so I don’t know… one friend says it’s really hard to get used to. I reserve the right to be obstinate.

In Conclusion

Hell if I know… Google is a complete mystery. But, by and large, they haven't steered me wrong, even if some believe what they are doing is more like sharecropping than software development. I'll be the first to try Chrome as soon as they release that Linux version… and while Google's at it, maybe a Linux Picasa client?

probonogeek Technology

Don’t be Fooled by .us.com

July 1st, 2008

I got an email today from Network Solutions declaring "Is the .COM Domain You Want Taken? Get the .US.COM & Save" and thought to myself, "wow, they are finally starting to advertise the .us TLD!" Here in the States we sort of take the .com and .org top level domains for granted. But in much of the rest of the world websites use their country code TLD… so, in the United Kingdom you will see lots of .uk domains. Personally, I prefer this, as it helps identify the site's situs (to use a legal term)… don't believe the hype of pure virtual existence; websites have tangible form in the physical world.

The trouble with this advertisement from Network Solutions is that they are not, in fact, advertising the .us TLD… they are advertising subdomains of the us.com domain. Note the .com is at the end, not preceding the country code as with .com.au (I just set one of these up yesterday, nothing special about Australia). So, what we've got going here is somebody (presumably Network Solutions or a subsidiary) spent the $20 necessary to register us.com–a process no different than when I registered probonogeek.org–and is now going to sell subdomains of their domain for $20 per year, passing it off as a ".COM Alternative!"

Now, I can't speak for anyone else, but the idea of giving $20 to some dude who happened to buy the us.com domain, when I could just as easily purchase a .us domain for the same price through a legitimate registrar, seems awfully silly. To further bolster my claim, have a look at the actual us.com site… looks like a google link farm to me. Having said that, if anyone wants to purchase subdomains of probonogeek.org, I'm offering them at the competitive price of only $15/year!

probonogeek Technology

Getting Back Up…

June 25th, 2008

The probonogeek.org server is starting to come back from the dead. I took down the slice following my recent hack and awaited instructions from my hosting provider. Sadly, this experience made them reconsider entering this business and they have terminated the beta slice program in which I was a part. They pointed me towards Slicehost, a competitor of Linode, which we use at work. Anyway, I thought it would be a good opportunity to try something new, so I signed up for a slice and got the ball rolling on a new server.

Remember kids, security first…

niles@zion:~/exploit$ ./exploit
-----------------------------------
Linux vmsplice Local Root Exploit
By qaaz
-----------------------------------
[+] mmap: 0x100000000000 .. 0x100000001000
[+] page: 0x100000000000
[+] page: 0x100000000038
[+] mmap: 0x4000 .. 0x5000
[+] page: 0x4000
[+] page: 0x4038
[+] mmap: 0x1000 .. 0x2000
[+] page: 0x1000
[+] mmap: 0x2b7638001000 .. 0x2b7638033000
[-] vmsplice: Bad address

Now I just need to restore my Subversion and Apache servers and I’ll be rocking and rolling once again!

probonogeek Technology

Hacked

June 18th, 2008

Today I received a very unhappy email from a fellow saying my webserver had launched an attack against his FTP server and that I needed to stop it or he would contact the Federal Authorities. I didn't believe it at first, to be perfectly honest, and asked him to produce logs verifying the attack. But then I went and checked my server and discovered it was running a script named ftp_scanner, which seemed to be attempting brute-force attacks against random FTP servers. ack.

I quickly killed all the ftp_scanner processes and found the offending script on the server (cleverly hidden in /tmp/…/ so as to be both hidden from a standard 'ls' and to appear like a system file when running 'ls -a'). The immediate problem addressed, I tried to figure out how this could have happened. To my horror, I discovered that Thursday of last week someone had run a brute-force attack against my SSH server and happened upon one of my users whose password was the same as her username. double ack!

A little back story is useful here… on Friday my server went down in a sort of funky way. I could still ping it, but http and ssh access were denied. It took all weekend working with my provider to get it re-enabled. They said it was because CPU usage had spiked, and since it's a virtualized server, my slice was shut off to prevent damage to the larger system. I should have investigated then, but I just figured the detection systems were borked and thought nothing of it. Bad idea.

Two days later, the intrepid attackers struck again… and I would never have known if not for the email from the poor guy whose server my server was attacking. But that's not the worst of it. While cleaning things up, I noticed an SSH login to the 'news' account, which is a system user account that you cannot usually log into. It was then that I discovered the /etc/shadow password file had been compromised to enable a variety of logins that should not have been possible. This, unfortunately, was the worst possible news. If the attackers could change /etc/shadow, it meant they had managed to obtain root-level access to my server. ack, ack, ack.

I went back to the /tmp/…/ folder to poke around the contents. It was then that I discovered the Linux vmsplice Local Root Exploit. And indeed, running the tests described, my system was vulnerable, and the entire slice had been compromised. Since I don't run tripwire, or anything like that, I was pretty much screwed. oh, ack…

All user data is now backed up onto my local desktop and the slice is scheduled to be cleared. Once the kernel is secured I will have to start building the system from the ground up all over again.

Oh, and if “Not Rick” is out there, I’m sorry to have caused you any trouble… but contacting me via means that prevent me from replying makes it difficult to apologize or explain the situation.

probonogeek Technology

Google vs. Privately Owned Community

June 2nd, 2008

This isn’t really a story about Google, but I was tipped off by a tech-legal blogger about the story because of Google’s involvement with the St. Paul suburb of North Oaks, Minnesota. The basic story boils down to (1) North Oaks residents actually own the roads in their town and have a trespassing ordinance, (2) Google violated that ordinance when it took photos of the town for its Street View program, (3) North Oaks city council requested the photos of the entire city be removed, (4) Google complied.

From a Public Relations standpoint, I have no argument with Google's decision… however, I do think there is a dangerous First Amendment precedent waiting in the wings here. In Marsh v. Alabama the U.S. Supreme Court ruled that First Amendment activity was still protected in the town of Chickasaw, Alabama even though every square inch of the town was private property owned by the Gulf Shipbuilding Corporation. The company had banned religious leafleting and the Court said the company was the State in that situation and thus must abide by the First Amendment.

I think the situation in Chickasaw, Alabama is analogous to North Oaks, Minnesota… except, instead of a for-profit company owning the streets, individuals bound by their deeds through the North Oaks Home Owners Association own the streets. But the situation is otherwise the same in that a private entity is attempting to get around the State Action doctrine by abolishing the State. But in so doing, they create a new State in all but name, and thus under Marsh must allow First Amendment activities. There remains the question of whether taking photos from streets is a First Amendment activity, a question I am not immediately familiar with, although I believe it is protected.

Either way, I imagine Google complied for the same reason it complies with requests from private citizens… it doesn’t have to under the law, but it does out of respect for privacy. My question now is what happens if a “citizen” of North Oaks, Minnesota writes to Google saying they wish to opt back into Street View?

probonogeek Law, Technology