commit 5621dd037ed1bae03c18babbf3c8fad90c16d76b Author: Jeremy Wall Date: Sun Apr 12 09:34:11 2020 -0400 Import from hg version of the site. diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..30d3331 --- /dev/null +++ b/.gitignore @@ -0,0 +1,2 @@ +generated* +.gost* diff --git a/_content/A-beautiful-thing-2005-5-11.yml b/_content/A-beautiful-thing-2005-5-11.yml new file mode 100644 index 0000000..271a890 --- /dev/null +++ b/_content/A-beautiful-thing-2005-5-11.yml @@ -0,0 +1,6 @@ +title: A beautiful thing +time: 2005-05-11 00:26:46 +section: Site-News +content-type: html +content: | +One of the beautiful things about web apps is the install. There is none. Nothing. Of course web apps aren't the only thing with that feature. I hear, although have not verified, that OS X's install process for applications is as simple as copying a folder over. Many open source applications are the same: copy a folder or a binary and you're done. No editing the registry. No registering services. No mess. The uninstall? Just as easy.... delete the folder. Web apps are perhaps a little superior in the uninstall department. Just don't go back to the page. No install means no uninstall either. No registry corruption. No forgotten registry entries hanging around clogging your machine. No services left behind. No mess. Computers are supposed to make your life easier, not harder. And web apps are just one more way of doing that. diff --git a/_content/A-day-in-the-life-of-Marzhill-Mu-2006-1-24.yml b/_content/A-day-in-the-life-of-Marzhill-Mu-2006-1-24.yml new file mode 100644 index 0000000..4b2ce36 --- /dev/null +++ b/_content/A-day-in-the-life-of-Marzhill-Mu-2006-1-24.yml @@ -0,0 +1,6 @@ +title: A day in the life of Marzhill Musings +time: 2006-01-24 14:45:58 +section: Site-News +content-type: html +content: | +Click below and see a cross section of my visiting population
A day in the life of Marzhill Musings diff --git a/_content/A-look-at-an-old-favourite-2005-4-29.yml b/_content/A-look-at-an-old-favourite-2005-4-29.yml new file mode 100644 index 0000000..8f6b499 --- /dev/null +++ b/_content/A-look-at-an-old-favourite-2005-4-29.yml @@ -0,0 +1,9 @@ +title: A look at an old favourite +time: 2005-04-29 02:45:06 +tags: + - CSS + - User-Interface + - links +content-type: html +content: | +CSS Zen Garden: The Beauty in CSS Design I took a look at an old favourite today. CSS Zen Garden still makes me awestruck. I am overcome by an urge to create something beautiful. Yet I feel small when I compare my efforts to them. If you ever want to look at beauty on the web then just visit the above link. If you ever want to see what good designers can do with standards based tools then look at the above link. Maybe someday I will have a submission featured there. Who knows, stranger things have happened. CSS is as much a part of WebApp development as javascript, html, or xml are. It is what gives you the power to put a face on your app. It makes standardizing the interface to your app easier and more comprehensible with its inheritance and cascading abilities. When you build your apps don't forget the visual design or the visual designer. Give him the tools to create beautiful things. Like here. diff --git a/_content/ABA-is-back-2006-5-1.yml b/_content/ABA-is-back-2006-5-1.yml new file mode 100644 index 0000000..4a2f82a --- /dev/null +++ b/_content/ABA-is-back-2006-5-1.yml @@ -0,0 +1,6 @@ +title: ABA is back +time: 2006-05-01 15:15:47 +section: Site-News +content-type: html +content: | +ABlogApart, a collection of folks with thoughtful commentary (or commentary, at least), is up and running again after a long hiatus. Drop by and meet the folks if you feel like it.
diff --git a/_content/According-to-Google-Analytics-2006-8-30.yml b/_content/According-to-Google-Analytics-2006-8-30.yml new file mode 100644 index 0000000..f3c3479 --- /dev/null +++ b/_content/According-to-Google-Analytics-2006-8-30.yml @@ -0,0 +1,6 @@ +title: According to Google Analytics... +time: 2006-08-30 21:33:24 +section: Site-News +content-type: html +content: | +The subject that drives the most interesting traffic to this site is Apache2 with mod_perl2 questions. To that end, after I finish this major software project on a mod_perl platform, I intend to write up a more detailed tutorial on actually using mod_perl2 in a production environment and what it can do for you. So watch this space :-) I'll be done with the project before too long. diff --git a/_content/Advanced-Nitrogen-Elements-2009-6-16.yml b/_content/Advanced-Nitrogen-Elements-2009-6-16.yml new file mode 100644 index 0000000..5835ec0 --- /dev/null +++ b/_content/Advanced-Nitrogen-Elements-2009-6-16.yml @@ -0,0 +1,74 @@ +title: Advanced Nitrogen Elements +time: 2009-06-16 23:33:58 +tags: + - Site-News + - erlang + - javascript + - nitrogen + - tutorial + - web-framework +content-type: html +content: | +In my last post I walked you through creating a basic nitrogen element. In this one I'll be covering some of the more advanced topics in nitrogen elements.

Nitrogen Event handlers

Nitrogen event handlers get called for any nitrogen event. A nitrogen event is specified by assigning #event to an actions attribute of a nitrogen element. The event handler in the page's module will get called with the postback of the event. Postbacks are an attribute of the event record and define the event that was fired. To handle the event you create an event function in the target module that matches your postback. For example: + +% given this event +#event{ type=click, postback={click, Id} } +% this event function would handle it +event({click, ClickedId}) -> +    io:format("I [~p] was clicked", [ClickedId]). + +Erlang's pattern matching makes it especially well suited for this kind of event based programming. The one annoying limitation of this event though is that each page has to handle it individually. You could of course create a dispatching module that handled the event for you, but why bother when nitrogen already does it for you? You can delegate an event to a specific module by setting the delegate attribute to the atom identifying that module. + +% delegated event +#event{ type=click, postback={click, Id}, delegate=my_module } + +You can delegate to any module you want. I use the general rule of thumb that if the event affects other elements on the page then the page module should probably handle it. If, however, the event doesn't affect other elements on the page then the element's module can handle it.
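To make the delegation concrete, here is a minimal sketch of a module receiving delegated events. The module name my_module matches the delegate attribute in the snippet above; the catch-all clause and the log messages are my own additions, not from the original post.

```erlang
%% Sketch of a module receiving delegated Nitrogen events.
%% Assumes an event was declared with delegate=my_module, e.g.
%%   #event{ type=click, postback={click, Id}, delegate=my_module }
-module(my_module).
-compile(export_all).
-include_lib("nitrogen/include/wf.inc").

%% Nitrogen calls my_module:event/1 with the postback term,
%% so ordinary pattern matching dispatches on it.
event({click, ClickedId}) ->
    io:format("I [~p] was clicked~n", [ClickedId]);
event(Other) ->
    %% hypothetical catch-all so unexpected postbacks are visible
    io:format("unhandled postback: ~p~n", [Other]).
```

With this in place, any element on any page can route its clicks here without the page module having to know about them.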

Scripts and Dynamic Postback

Now let's make it a little more interesting. Imagine a scenario where we want to interact with some javascript on a page and dynamically generate data to send back to nitrogen. As an example let's create a silly element that grabs the mouse coordinates of a click on the element and sends that back to nitrogen. A first attempt might look something like this: + +-record(silly, {?ELEMENT_BASE(element_silly)}). +And the module is likewise simple: + +-module(element_silly). +-compile(export_all). +-include("elements.hrl"). +-include_lib("nitrogen/include/wf.inc"). +render(ControlId, R) -> + Id = wf:temp_id(), + %% wait!! where do we get the loc from?! + ClickEvent = #event{type=click, postback={click, Loc}}, + Panel = #panel{id=Id, style="width: 100px; height: 100px;", + actions=ClickEvent}, element_panel:render(Panel). + +event({click, Loc}) -> + wf:update(body, wf:f("you clicked at point: ~p", [Loc])). + +Well of course you spot the problem here. Since the click happens client side we don't know what to put in the Loc variable for the postback. A typical postback won't work because the data will be generated in the client and not the Nitrogen server. So how could we get the value of the coordinates sent back? The javascript to grab the coordinates with jquery looks like this: +var coord = obj('me').pageX + obj('me').pageY; + To plug that into the click event is pretty easy since action fields in an event can hold other events or javascript or a list combining both: + +Script = "var coord = obj('me').pageX + obj('me').pageY;", +ClickEvent = #event{type=click, postback={click, Loc}, actions=Script} + +Now we've managed to capture the coordinates of the mouse click, but we still haven't sent it back to the server. This javascript needs a little help. What we need is a drop box. Let's enhance our element with a few helpers: + +-module(element_silly). +-compile(export_all). +-include("elements.hrl"). +-include_lib("nitrogen/include/wf.inc"). 
+render(ControlId, R) -> + Id = wf:temp_id(), + DropBoxId = wf:temp_id(), + MsgId = wf:temp_id(), + Script = wf:f("var coord = obj('me').pageX + obj('me').pageY; $('~s').value = coord;", + [DropBoxId]), + ClickEvent = #event{type=click, postback={click, DropBoxId, MsgId}, + actions=Script}, + Panel = #panel{id=Id, style="width: 100px; height: 100px;", + actions=ClickEvent, body=[#hidden{id=DropBoxId}, + #panel{id=MsgId}]}, + element_panel:render(Panel). + +event({click, DropBoxId, Msg}) -> + Loc = hd(wf:q(DropBoxId)), + wf:update(Msg, wf:f("you clicked at point: ~s", [Loc])). + +Ahhh, there we go. Now our element when clicked will:
  1. use javascript to grab the coordinates of the mouse click
  2. use javascript to store those coordinates in the hidden element
  3. use a postback to send the click event back to a nitrogen event handler with the id of the hidden element where it stored the coordinates.
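As a side note, the value that script stores can be modeled as a tiny pure function (a sketch of the expression above, not code from the post). Because the expression adds pageX and pageY, the server receives a single sum rather than an (x, y) pair; a real element would more likely send both values.

```javascript
// Models the tutorial's expression: var coord = pageX + pageY.
// The event object here is a hypothetical stand-in for a click event.
function coordFromClick(evt) {
  return evt.pageX + evt.pageY;
}

// A hypothetical click at (30, 12) yields a single number, 42,
// which is what ends up in the hidden drop box.
console.log(coordFromClick({pageX: 30, pageY: 12})); // 42
```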
We have managed to grab dynamically generated data from the client side and drop it somewhere that nitrogen can retrieve it. In the process we have used an event handler, custom javascript, and dynamic javascript postbacks. Edit: Corrected typo - June 16, 2009 at 11:40 pm diff --git a/_content/All-for-lack-of-a-plug-2005-5-3.yml b/_content/All-for-lack-of-a-plug-2005-5-3.yml new file mode 100644 index 0000000..d001476 --- /dev/null +++ b/_content/All-for-lack-of-a-plug-2005-5-3.yml @@ -0,0 +1,6 @@ +title: All for lack of a plug +time: 2005-05-03 00:06:25 +section: Site-News +content-type: html +content: | +The site was down last evening again. Someone unplugged the network cable. I'm not sayin who, but he was short and his name starts with a T. Anyway the server is now located in a safer environment where such things should no longer occur. (and T has received a strong lecture on unplugging strange cords) diff --git a/_content/App-VS-Page-2005-4-11.yml b/_content/App-VS-Page-2005-4-11.yml new file mode 100644 index 0000000..dc62745 --- /dev/null +++ b/_content/App-VS-Page-2005-4-11.yml @@ -0,0 +1,6 @@ +title: App VS. Page +time: 2005-04-11 22:49:33 +section: Software-Development +content-type: html +content: | +Software Development on the Web is undergoing a revolution. We've had the ability to build responsive, usable, dynamic applications for quite a while now. But no one has capitalized on it. No one has been building those applications. Most web developers are still stuck in the WebPage mode of design and not the WebApp mode of design. Thankfully, companies like Google are starting to give the WebApp design philosophy some high profile attention with apps like Gmail and Google Maps. So what exactly is the difference? There are a number of radical differences between WebApp and WebPage Design modes. Each has a use in website designing. Webpage Design is about presenting information. It focuses on making the content readable, understandable, and locatable. 
Reference sites and online literature sites do well with this design philosophy. Blogs are another instance where the WebPage Design philosophy works well. WebApp Design, on the other hand, focuses on responsive, dynamic, realtime action. Sites that allow the user to do something benefit the most from this approach. The Administration front ends to Reference sites, a WebMail site, and Online Game sites are all excellent candidates for the web app approach to design. Elements of these approaches to design are now getting combined in interesting ways. Google's autocomplete feature is one example. A melding of the two can only be beneficial to Web Development trends. In future articles I will be talking about some of the technologies that make these trends possible. diff --git a/_content/Are-You-a-Data-Middle-Man?-2006-1-17.yml b/_content/Are-You-a-Data-Middle-Man?-2006-1-17.yml new file mode 100644 index 0000000..0a2d6d0 --- /dev/null +++ b/_content/Are-You-a-Data-Middle-Man?-2006-1-17.yml @@ -0,0 +1,8 @@ +title: Are You a Data Middle Man? +time: 2006-01-17 16:29:22 +tags: + - Data + - Software-Development +content-type: html +content: | +Not too long ago, before the bubble burst as they say, one of the HOT new things was B2B technology. Hooking businesses together for their mutual profit. You don't hear a whole lot about that anymore. I think probably because those companies lost their focus and consequently never made any money. You see the real power of the "network" is in sharing data. B2B really was all about sharing that data. If you could emphasize that feature you could have made money. Becoming the middle guy in the selling and purchasing of data could become a very powerful and lucrative business. Especially since the new "emphasis" on standards is helping the process along. If you look at a lot of the hottest things in the web right now they all talk about sharing data of some sort. Flickr, Techdirt, Blog aggregators. 
All of them provide ways to access and share their data easily. And the investors are salivating. Notice I said "their" data. If you never have data to share no one uses your API/Standard. The reality is that Standards only work if someone shows you how to use them and uses them themselves. The Data Middle Men are the ones who will define these standards of exchange. They will be brokering the transfers and, more importantly, providing the infrastructure for those transfers. I've been thinking about this a lot lately because one of my customers has an opportunity to become one of the first of the Homecare Data Middle Men. It's gonna be a fun and wild ride :-) diff --git a/_content/Bear-with-me-2005-6-26.yml b/_content/Bear-with-me-2005-6-26.yml new file mode 100644 index 0000000..8870cef --- /dev/null +++ b/_content/Bear-with-me-2005-6-26.yml @@ -0,0 +1,6 @@ +title: Bear with me +time: 2005-06-26 06:07:53 +section: Site-News +content-type: html +content: | +I'm renovating. You may run across some different styles popping up every once in a while. I'm still figuring out how this wordpress themes thing works so there may be some bumps in the road along the way. diff --git a/_content/Beauty-Artistry-and-computer-c-2006-2-21.yml b/_content/Beauty-Artistry-and-computer-c-2006-2-21.yml new file mode 100644 index 0000000..95c97fc --- /dev/null +++ b/_content/Beauty-Artistry-and-computer-c-2006-2-21.yml @@ -0,0 +1,8 @@ +title: Beauty, Artistry, and computer code. +time: 2006-02-21 10:08:57 +tags: + - Site-News + - Software-Development +content-type: html +content: | +I'd like to take a moment to wax poetic. Code hackers have a term for well written code: Elegant. We appreciate elegant design and algorithms in code. It's a pleasure to work on code like that. We will stop and just think "Man, that's beautiful." Even our quick and dirty scripts somehow turn out to be pieces of art. So what makes a piece of code beautiful? 
It's a little bit hard to describe but there are usually several elements that contribute to code's beauty. Those elements that compose what I perceive as beauty in code are: Efficiency Efficiency is perhaps one of the most important elements in the perceived beauty of code. Code that is streamlined, sleek, and targeted is beautiful. This kind of code does one thing and does it well. There is no wasted effort or duplicated work in efficient code. Efficient code knows what it needs to do and gets down to business. Many times this equates to less code though not always. Cleverness Cleverness is a close second in the elements of code beauty list. Cleverness as defined by "Now, that's a cool way to do it!!" It's coming up with a better and heretofore unconsidered method or algorithm to get the job done. It's similar to those paintings in art with surprise built into them. Like the exercises in perspective where you don't realize it's a painting till you get close. It makes you stop and go Wow! Now that's cool. Code cleverness usually has a lot to do with Efficiency. If your clever hack makes the code less efficient it may actually decrease your code's beauty. It can be a double edged sword. Style Style along with Flow are the subjective parts of the beauty equation. It means different things to different people. It's part of what makes someone love perl and hate python while a different person loves python and hates perl. Style encompasses such things as indenting, code organization, and naming conventions. Everyone has a different opinion of what looks good. Similar to art where one person likes modern art and another thinks it looks ridiculous. Flow Flow is also a highly subjective part of the beauty equation. Some people like to flowchart for days before even touching the keyboard. Others prefer to let the program's logic structure sort of organically grow. Still others prefer a balance somewhere between the two. 
Flow covers how your code handles the various tasks that it is responsible for. It encompasses reusability. And it can increase or decrease your code's efficiency. Like art, everyone has their own definition of beauty when it comes to Flow. So how do you classify beautiful code? Let me know in the comments. diff --git a/_content/Beryl-Scale-Plugin-2007-2-15.yml b/_content/Beryl-Scale-Plugin-2007-2-15.yml new file mode 100644 index 0000000..6f08e10 --- /dev/null +++ b/_content/Beryl-Scale-Plugin-2007-2-15.yml @@ -0,0 +1,6 @@ +title: Beryl Scale Plugin +time: 2007-02-15 15:31:00 +section: Site-News +content-type: html +content: | +The Beryl Scale window selection plugin diff --git a/_content/Beryl-XGL-The-CUBE-2007-2-22.yml b/_content/Beryl-XGL-The-CUBE-2007-2-22.yml new file mode 100644 index 0000000..62407d3 --- /dev/null +++ b/_content/Beryl-XGL-The-CUBE-2007-2-22.yml @@ -0,0 +1,6 @@ +title: Beryl/XGL The CUBE.... +time: 2007-02-22 16:37:57 +section: Site-News +content-type: html +content: | +I'm not sure how useful this actually is but you can't deny that it's pretty cool. May I present to you the Cube: Desktop Cube Now, like all the things Beryl can do, this is in realtime so you can watch progress bars move, text scroll, and movies play all while rotating your cube. diff --git a/_content/Beryl-XGL-Transparency-and-Thumb-2007-2-14.yml b/_content/Beryl-XGL-Transparency-and-Thumb-2007-2-14.yml new file mode 100644 index 0000000..770f3c2 --- /dev/null +++ b/_content/Beryl-XGL-Transparency-and-Thumb-2007-2-14.yml @@ -0,0 +1,6 @@ +title: Beryl/XGL Transparency and Thumbnailing +time: 2007-02-14 13:48:20 +section: Site-News +content-type: html +content: | +Beryl allows you to do realtime transparency and thumbnailing with windows. The following screenshot demonstrates these capabilities. There is a video playing underneath a transparent window and also playing on the thumbnail of the window over the taskbar. 
Unfortunately you can not see the video actually playing in the thumbnail and underneath the transparent window so you'll have to take my word for it. It's a striking demonstration of how powerful the Beryl/XGL architecture can be. Transparency and Thumbnailing diff --git a/_content/Beryl-XGL-the-Scale-plugin-2007-2-15.yml b/_content/Beryl-XGL-the-Scale-plugin-2007-2-15.yml new file mode 100644 index 0000000..dc665a2 --- /dev/null +++ b/_content/Beryl-XGL-the-Scale-plugin-2007-2-15.yml @@ -0,0 +1,6 @@ +title: Beryl/XGL the Scale plugin +time: 2007-02-15 15:32:10 +section: Site-News +content-type: html +content: | +A particularly handy feature of Beryl is the Scale Plugin. This plugin gives an easy, intuitive way to select the window you want to switch to on a busy desktop. It can be activated by setting a hotspot or using a hotkey. It takes all the open windows and tiles them in the screen, scaling them to fit, so you can pick the window you want to switch to easily. It also displays those windows in realtime so video keeps playing, command-line console text keeps scrolling, and so on. Here is a screenshot of this feature: Beryl Scale Plugin diff --git a/_content/BerylTabbing-2007-2-13.yml b/_content/BerylTabbing-2007-2-13.yml new file mode 100644 index 0000000..3a45a3e --- /dev/null +++ b/_content/BerylTabbing-2007-2-13.yml @@ -0,0 +1,6 @@ +title: BerylTabbing +time: 2007-02-13 12:40:45 +tags: Site-News +content-type: html +content: | +Two windows Tabbed for grouping in Beryl diff --git a/_content/Bill-Gates-He-eats-his-own-dogf-2006-4-6.yml b/_content/Bill-Gates-He-eats-his-own-dogf-2006-4-6.yml new file mode 100644 index 0000000..a4a0bfa --- /dev/null +++ b/_content/Bill-Gates-He-eats-his-own-dogf-2006-4-6.yml @@ -0,0 +1,6 @@ +title: Bill Gates. He eats his own dogfood. +time: 2006-04-06 09:09:26 +section: Site-News +content-type: html +content: | +Bill Gates, the man behind Microsoft, has published an interview with CNN on how he works. 
Read the article: Bill Gates - How I work on money.cnn.com. Some of it no doubt is more advertising than informative, however it is nice to see someone who actually uses what his company produces. The problem with Microsoft nowadays isn't the quality of their products. It's the price and the licensing. Microsoft has some great integrations in their products. If you can afford to purchase the SharePoint servers, the latest Office version, all the little Office addons, and keep em up to date then by all means your office can experience a lot of internal productivity. The thing about Bill's job though is that he doesn't have to worry about incompatibilities within his organization or when dealing with other companies. Microsoft always has the latest versions of everything internally. Bill works in sort of an ivory tower where everything just works because they write the software. Out there in the real world though Microsoft is losing a very important battle. They are still fighting to keep things a Microsoft world. But the Genie is out of the bottle and eventually they will have to begin supporting open standards with no catches. People will stop caring about the nifty new features if they can't use them anywhere they want on any platform. Microsoft, in point of fact, can't depend on everyone's workplace being like Microsoft's anymore. I wonder what happens when Bill gets an email back saying: "I'm sorry, but we can't read your Office 200x document. Please resend it in an open format like ODF, Plain Text, or RTF. Thank you." That day is coming. We may not know when, and I'm not stupid enough to try to predict it, but it is coming. The free market is running like it's supposed to, and Microsoft is in for a rude awakening if they don't start preparing for it. Keep an eye on Minnesota. I have a feeling this trend we are seeing may accelerate exponentially. 
diff --git a/_content/Blender:-Its-more-than-a-modell-2007-3-12.yml b/_content/Blender:-Its-more-than-a-modell-2007-3-12.yml new file mode 100644 index 0000000..8d9f746 --- /dev/null +++ b/_content/Blender:-Its-more-than-a-modell-2007-3-12.yml @@ -0,0 +1,9 @@ +title: Blender - It's more than a modeller +time: 2007-03-12 23:00:55 +tags: + - Site-News + - Open-Source + - OSS-Apps +content-type: html +content: | +Blender is known as the most complete open source 3D modeller out there. It also has a reputation for being one of those love it or hate it software packages with a steep learning curve. Blender is more than just a 3d content creation package though. It also happens to be perhaps the best video compositing and non-linear editor available as open source. Cinelerra has a lot of power but isn't particularly great when you need to do a lot of keyframing. Jahshaka has a lot of potential but is unstable and still has a long way to go. Kino only does Digital video. But blender?.. Blender has it all. Blender is quite possibly the only package that gives you an end to end solution for content creation. What can blender do for you as a video editor? Well just about anything actually. It can composite images and video/animations. It has a non linear video editor. It has an audio sequencer. In just under an hour I did the following short video clip using three still images. Compositing Still Image Test Video Blender has introduced node based compositing and as of 2.43 it has become quite powerful in that arena. The .blend file to do the effects seen above can be downloaded here. Hopefully it will show you some of the power that blender can bring to a video production pipeline. 
diff --git a/_content/BrickLayer-RC1-2005-11-19.yml b/_content/BrickLayer-RC1-2005-11-19.yml new file mode 100644 index 0000000..4f71459 --- /dev/null +++ b/_content/BrickLayer-RC1-2005-11-19.yml @@ -0,0 +1,13 @@ +title: BrickLayer RC1 +time: 2005-11-19 +timeformat: 2006-01-02 +tags: + - Site-News + - APIs + - BrickLayer + - Languages + - Perl + - Software-Development +content-type: html +content: | +My first release of BrickLayer is ready. I'm still writing some of the documentation, but I couldn't resist giving you a peek. You can get it here: BrickLayer And here is the documentation I have written so far: Using BrickLayer BrickLayer Templating BrickLayer Plugin Development The BrickLayer DB Interface documentation is in progress. diff --git a/_content/BrickLayer-RC2-is-out-2006-1-25.yml b/_content/BrickLayer-RC2-is-out-2006-1-25.yml new file mode 100644 index 0000000..9703834 --- /dev/null +++ b/_content/BrickLayer-RC2-is-out-2006-1-25.yml @@ -0,0 +1,10 @@ +title: BrickLayer RC2 is out +time: 2006-01-25 13:41:38 +tags: + - Site-News + - BrickLayer + - Perl + - Software-Development +content-type: html +content: | +Bricklayer RC2 Download it while it's hot. I have added a number of bugfixes and enhancements. But it's still in testing so don't plan on using this in a production environment yet :-) diff --git a/_content/Bricklayer-Documenation-Update-2006-8-22.yml b/_content/Bricklayer-Documenation-Update-2006-8-22.yml new file mode 100644 index 0000000..110785e --- /dev/null +++ b/_content/Bricklayer-Documenation-Update-2006-8-22.yml @@ -0,0 +1,6 @@ +title: Bricklayer Documentation Update +time: 2006-08-22 12:23:49 +section: Site-News +content-type: html +content: | +I've revamped and rewritten the Bricklayer documentation to reflect some significant changes to the API. You can get it off the Sourceforge site or browse it online here using the link on the menu. Please let me know if you find any errors in spelling, grammar, or the API description. 
Also let me know if there is any additional information you might want included in it. diff --git a/_content/Bricklayer-RC21-2006-2-8.yml b/_content/Bricklayer-RC21-2006-2-8.yml new file mode 100644 index 0000000..9a96b6a --- /dev/null +++ b/_content/Bricklayer-RC21-2006-2-8.yml @@ -0,0 +1,10 @@ +title: Bricklayer RC2.1 +time: 2006-02-08 23:55:44 +tags: + - Site-News + - APIs + - BrickLayer + - Perl +content-type: html +content: | +BrickLayer RC2.1 Bugfixes and modifications to the Bricklayer RC2 release. Accompanied by amended documentation. diff --git a/_content/Bricklayer-Refactored-and-other-2007-1-16.yml b/_content/Bricklayer-Refactored-and-other-2007-1-16.yml new file mode 100644 index 0000000..2938ab8 --- /dev/null +++ b/_content/Bricklayer-Refactored-and-other-2007-1-16.yml @@ -0,0 +1,11 @@ +title: Bricklayer Refactored and other news +time: 2007-01-16 23:42:54 +tags: + - APIs + - BrickLayer + - Open-Source + - Perl + - Software-Development +content-type: html +content: | +Bricklayer has been heavily refactored for more modularity. You can see the much-changed codebase at the new svn repository. Instructions for SVN Checkout can be found here: SourceForge SVN Checkout Current is the trunk branch in the repository tree structure. Documentation and some examples for the new API will be forthcoming shortly, including the new Database Access API concept I will be introducing. A preview of the proof of concept code is in the Bricklayer/Data libraries directory. 
diff --git a/_content/Bricklayer-Subversion-Repository-2006-1-30.yml b/_content/Bricklayer-Subversion-Repository-2006-1-30.yml new file mode 100644 index 0000000..d5cd6a2 --- /dev/null +++ b/_content/Bricklayer-Subversion-Repository-2006-1-30.yml @@ -0,0 +1,11 @@ +title: Bricklayer Subversion Repository is up +time: 2006-01-30 10:27:57 +tags: + - Site-News + - APIs + - BrickLayer + - Perl + - Software-Development +content-type: html +content: | +I have finished moving my bricklayer subversion source code repository to a public location. You can find it at the following link. If your browser recognizes the svn protocol you can just click the link below. For all others you'll need to copy the link location into a subversion browser of some sort. Now you can get the cutting edge copy :-) enjoy!! BrickLayer Subversion Repository diff --git a/_content/Bricklayer-is-Coming-2005-11-16.yml b/_content/Bricklayer-is-Coming-2005-11-16.yml new file mode 100644 index 0000000..444f33d --- /dev/null +++ b/_content/Bricklayer-is-Coming-2005-11-16.yml @@ -0,0 +1,6 @@ +title: Bricklayer is Coming.... +time: 2005-11-16 23:18:49 +section: Site-News +content-type: html +content: | +
BrickLayer Logo Details to follow.
diff --git a/_content/Bricklayer-is-on-Sourceforge-2006-7-27.yml b/_content/Bricklayer-is-on-Sourceforge-2006-7-27.yml new file mode 100644 index 0000000..8c06e70 --- /dev/null +++ b/_content/Bricklayer-is-on-Sourceforge-2006-7-27.yml @@ -0,0 +1,6 @@ +title: Bricklayer is on Sourceforge +time: 2006-07-27 21:26:31 +section: Site-News +content-type: html +content: | +SourceForge project page Have a look if you want. diff --git a/_content/Bricklayer::Templater-is-getting-2007-7-14.yml b/_content/Bricklayer::Templater-is-getting-2007-7-14.yml new file mode 100644 index 0000000..6ce662a --- /dev/null +++ b/_content/Bricklayer::Templater-is-getting-2007-7-14.yml @@ -0,0 +1,6 @@ +title: "Bricklayer::Templater is getting unit tests" +time: 2007-07-14 19:16:38 +section: Site-News +content-type: html +content: | +Yay I'm finally getting around to it and boy is it a good thing. The code is already benefitting from it. I'll be uploading some stuff to the sourceforge svn repository soon so stay tuned. diff --git a/_content/Bricklayer::Templater-is-on-CPAN-2007-8-14.yml b/_content/Bricklayer::Templater-is-on-CPAN-2007-8-14.yml new file mode 100644 index 0000000..5b1768d --- /dev/null +++ b/_content/Bricklayer::Templater-is-on-CPAN-2007-8-14.yml @@ -0,0 +1,12 @@ +title: "Bricklayer::Templater is on CPAN" +time: 2007-08-14 16:20:30 +tags: + - Site-News + - APIs + - BrickLayer + - Open-Source + - Perl + - Software-Development +content-type: html +content: | +And I have registered its namespace so it shows up in the module list. 
This means the Bricklayer::* namespace can now be used to begin building the various components of my evolving frameworks. Bricklayer::Templater diff --git a/_content/Creating-Custom-Nitrogen-Element-2009-5-22.yml b/_content/Creating-Custom-Nitrogen-Element-2009-5-22.yml new file mode 100644 index 0000000..ae1727f --- /dev/null +++ b/_content/Creating-Custom-Nitrogen-Element-2009-5-22.yml @@ -0,0 +1,167 @@ +title: Creating Custom Nitrogen Elements +time: 2009-05-22 19:05:57 +tags: + - Site News + - ajax + - erlang + - event driven + - nitrogen + - tutorial + - functional + - web framework +section: coding +content-type: html +content: | +Nitrogen is a web framework written in Erlang for fast AJAX web applications. You can get Nitrogen on github. Nitrogen comes with a set of useful controls, or elements in nitrogen parlance, but if you are going to do anything fancier than a basic hello world you will probably want to create some custom controls. This tutorial will walk you through the ins and outs of writing a custom element for Nitrogen. We will be creating a simple notification element similar to one I use in the Iterate! project. It will need to be able to: Every Nitrogen element has two main pieces: the Record and the Module. I'll go through each in order and walk you through creating our notification element.

The Record

The record defines all the state required to create a nitrogen element. Every record needs a certain base set of fields. These fields can be added to your record with the ?ELEMENT_BASE macro. The macro is available in the nitrogen include file wf.inc. That include file also gives you access to all the included nitrogen element records. Below you can see the record definition for our notify element. Since it is very simple in its design it only needs the base fields plus two additional ones: expire, which holds our optional expiration time and defaults to false to indicate no expiration; and msg, which holds the contents of our notification. +
+%Put this line in an include file for your elements
+-record(notify, {?ELEMENT_BASE(element_notify), expire=false, msg}).
+
+
+% put these at the top of your elements module
+-include_lib("nitrogen/include/wf.inc").
+% the include file mentioned above; you may call it whatever you want
+-include("elements.hrl").
+
+The ELEMENT_BASE macro gives your element several fields and tells nitrogen which module handles the rendering of your element. You can specify any module you want, but the convention is to name the module element_<element name>. The fields provided are: id, class, style, actions, and show_if. You can use them as you wish when it comes time to render your element. Which brings us to the module.
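+To see where all this is headed, here is a quick sketch of how a page might use the finished element. The body function below is the standard entry point for a nitrogen page; the message text and expiration are just example values:
+
+% in any nitrogen page module
+body() ->
+    % show a notification that hides itself after 5 seconds
+    [#notify{msg="Build finished.", expire=5}].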

The Module

+Of the two pieces of a nitrogen element the module does the manual labor. It renders the element and in some cases defines the handlers for events fired by the element. The module must export a render/2 function. This function will be called whenever nitrogen needs to render a particular instance of your element. Its two arguments are the ControlId and the Record defining this element instance. Of these the ControlId is probably the least understood. It is passed into your render method by nitrogen and is the assigned HTML id for your particular element. This is important to understand because, when you call the next render method in your element tree, you will have to pass an id on. The rule of thumb I use is that if you want to use a different id for your toplevel element then you can ignore the ControlId. Otherwise you should use it as the id for your toplevel element in the control. So your element's module should start out with something like this: +
+-module(element_notify).
+-compile(export_all).
+-include_lib("nitrogen/include/wf.inc").
+-include("elements.hrl").
+% give us a way to inspect the fields of this elements record
+% useful in the shell where record_info isn't available
+reflect() -> record_info(fields, notify).
+% Render the custom element
+render(ControlId, R) ->
+    % get a temp id for our notify element instance
+    Id = ControlId,
+    % Our toplevel of the element will be a panel (div)
+    Panel = #panel{id=Id},
+    % the element_panel module is used to render the panel element
+    element_panel:render(Id, Panel).
+    % Or use the alternative method: look the rendering module up
+    % from the record instead of hardcoding element_panel:
+    % Module = Panel#panel.module,
+    % Module:render(Id, Panel).
+
+Notice that the record's module attribute tells us what module we should call to render the element in the alternative method. In our case we will just hardcode the module since it's known to us. So now we have a basic element that renders a div with a temp id to our page. That's not terribly useful though. We actually need this element to render our msg, and with some events attached. Let's add the code to put our message in the panel's contents. +
+Panel = #panel{id=Id, body=R#notify.msg},
+element_panel:render(ControlId, Panel)
+
+Now whatever is in the msg attribute of our notify record will be in the body of the panel when it gets rendered. All we need is a way to dismiss it. A link should do the trick. But now we have a slight problem. In order to add our dismiss link we need to add it to the body of the Panel, but the msg is already occupying that space. We could use a list for the body and append the link to it, but that doesn't really give us a lot of control over styling the element. What we really need is for the msg to be in an inner panel while the outer panel holds any controls the element needs. +
+Link = #link{text="dismiss"},
+InnerPanel = #panel{body=R#notify.msg},
+Panel = #panel{id=Id, body=[InnerPanel,Link]},
+element_panel:render(ControlId, Panel)
+
+Our link doesn't actually dismiss the notification yet though. To add that we need to add a click event to the link. Nitrogen has a large set of events and effects available. You can find them in the Nitrogen documentation. We will be using the click event and the hide effect. +
+Event = #event{type=click,
+               actions=#hide{effect=blind, target=Id}},
+Link = #link{text="dismiss", actions=Event},
+
+Now our module should look something like this: +
+-module(element_notify).
+-compile(export_all).
+-include_lib("nitrogen/include/wf.inc").
+-include("elements.hrl").
+% give us a way to inspect the fields of this elements record
+% useful in the shell where record_info isn't available
+reflect() -> record_info(fields, notify).
+% Render the custom element
+render(ControlId, R) ->
+    % get a temp id for our notify element instance
+    Id = ControlId,
+    % Our toplevel of the element will be a panel (div)
+    Event = #event{type=click, actions=#hide{effect=blind, target=Id}},
+    Link = #link{text="dismiss", actions=Event},
+    InnerPanel = #panel{body=R#notify.msg},
+    Panel = #panel{id=Id, body=[InnerPanel,Link]},
+    % the element_panel module is used to render the panel element
+    element_panel:render(Id, Panel).
+
+This is a fully functional nitrogen element. But it's missing a crucial feature to really shine. Our third feature for this element was an optional expiration for the notification. Right now you have to click dismiss to get rid of the element on the page. But sometimes we might want the element to go away after a predetermined time. This is what our expire record field is meant to determine for us. There are three possible cases for this field. This is the kind of thing erlang's case statement was made for: +
+case R#notify.expire of
+  false ->
+    undefined;
+  N when is_integer(N) ->
+    % we expire in this many seconds
+    wf:wire(Id, #event{type='timer', delay=N, actions=#hide{effect=blind, target=Id}});
+  _ ->
+    % log error and don't expire
+    undefined
+end
+
+Notice the wf:wire statement. wf:wire is an alternate way to add events to a nitrogen element. Just specify the id and then the event record or javascript string you want to use. I've noticed that for events of type timer wf:wire works better than assigning them to the actions field of the element record. I have no idea why, because I have not looked into it really closely yet. Now our module looks like this: +
+-module(element_notify).
+-compile(export_all).
+-include_lib("nitrogen/include/wf.inc").
+-include("elements.hrl").
+% give us a way to inspect the fields of this elements record
+% useful in the shell where record_info isn't available
+reflect() -> record_info(fields, notify).
+% Render the custom element
+render(ControlId, R) ->
+  % get a temp id for our notify element instance
+  Id = ControlId,
+  % Our toplevel of the element will be a panel (div)
+  case R#notify.expire of
+    false ->
+      undefined;
+    N when is_integer(N) ->
+      % we expire in this many seconds
+      wf:wire(Id, #event{type='timer', delay=N, actions=#hide{effect=blind, target=Id}});
+    _ ->
+      % log error and don't expire
+      undefined
+  end,
+  Event = #event{type=click, actions=#hide{effect=blind, target=Id}},
+  Link = #link{text="dismiss", actions=Event},
+  InnerPanel = #panel{body=R#notify.msg},
+  Panel = #panel{id=Id, body=[InnerPanel,Link]},
+  % the element_panel module is used to render the panel element
+  element_panel:render(ControlId, Panel).
+
+We have now fulfilled all of our criteria for the element. It shows a message of our choosing. It can be dismissed with a click. And it has an optional expiration. One last thing to really polish it off though would be to allow styling through the use of css classes. The ELEMENT_BASE macro we used in our record definition gives our element a class field. We can use that to set our Panel's class, allowing any user of the element to set the class as they wish, like so: +
+Panel = #panel{id=Id, class=["notify ", R#notify.class],
+               body=[InnerPanel,Link]},
+
+This gives us the final module for our custom element: +
+-module(element_notify).
+-compile(export_all).
+-include_lib("nitrogen/include/wf.inc").
+-include("elements.hrl").
+% give us a way to inspect the fields of this elements record
+% useful in the shell where record_info isn't available
+reflect() -> record_info(fields, notify).
+% Render the custom element
+render(ControlId, R) ->
+  % get a temp id for our notify element instance
+  Id = ControlId,
+  % Our toplevel of the element will be a panel (div)
+  case R#notify.expire of
+    false ->
+      undefined;
+    N when is_integer(N) ->
+      % we expire in this many seconds
+      wf:wire(Id, #event{type='timer', delay=N, actions=#hide{effect=blind, target=Id}});
+    _ ->
+      % log error and don't expire
+      undefined
+  end,
+  Event = #event{type=click, actions=#hide{effect=blind, target=Id}},
+  Link = #link{text="dismiss", actions=Event},
+  InnerPanel = #panel{body=R#notify.msg},
+  Panel = #panel{id=Id, class=["notify ", R#notify.class],
+                 body=[InnerPanel,Link]},
+  % the element_panel module is used to render the panel element
+  element_panel:render(ControlId, Panel).
+
+I will cover delegated events and more advanced topics in a later tutorial. diff --git a/_content/Creation-Programming-and-Easte-2006-3-27.yml b/_content/Creation-Programming-and-Easte-2006-3-27.yml new file mode 100644 index 0000000..f3e3135 --- /dev/null +++ b/_content/Creation-Programming-and-Easte-2006-3-27.yml @@ -0,0 +1,6 @@ +title: Creation, Programming, and Easter Eggs +time: 2006-03-27 11:23:04 +section: Site-News +content-type: html +content: | +In programming we have the term Easter Egg. It refers to hidden functionality in an application. Many times hidden even from the management. Some of you may remember the famous pinball game or Flight simulator hidden in the Office 97 Apps. Games are another popular place for programmers to hide Easter Eggs. They are kind of the Programmer's little joke for the User of their software. Easter Eggs are fun and can really brighten a dull day. Some of you may know that I consider programming a creative art. In fact I consider some programming to be not unlike painting or writing poetry. For those who think like that, Easter eggs are like those hidden messages artists will sometimes hide in a painting. If you know me very well you will also know I'm a Christian. I believe our capacity to create and enjoy beauty and surprise is a trait that comes from being made in God's Image. When you think about it Creating the Universe was kind of like programming a work of art. God is the ultimate Hacker. He encoded the biology of whole Species into a few strands of DNA. He wrote the rules of Physics, and designed how Numbers work. His code is so beautiful it would move Linus Torvalds himself to tears. And like many coders I think God added a few Easter Eggs to his work. So the next time you hear of some funny coincidence that just strikes you as a little odd and somehow funny just imagine God in the beginning of Time turning to the Holy Spirit and saying "Wait till they get a load of this one..."
diff --git a/_content/Data::Annotated-and-Class::Data:-2007-8-27.yml b/_content/Data::Annotated-and-Class::Data:-2007-8-27.yml new file mode 100644 index 0000000..66a9062 --- /dev/null +++ b/_content/Data::Annotated-and-Class::Data:-2007-8-27.yml @@ -0,0 +1,6 @@ +title: "Data::Annotated and Class::Data::Annotated" +time: 2007-08-27 16:43:45 +section: Site-News +content-type: html +content: | +So I've added two new modules to my CPAN repertoire: Data::Annotated and Class::Data::Annotated. Data::Annotated is a module intended to hold an annotation about a piece of a data structure independently of the data structure itself. The annotation can be anything: a hash, an array, or a scalar value. The piece of the data structure is referenced by a Data::Path. Class::Data::Annotated wraps a perl data structure and an associated set of annotations together in one place. I've also added to Data::Path's functionality so that it can annotate object methods and coderefs stored in a data structure. Once I've ironed out details with the original author I'll hopefully be uploading that. Anyway feel free to check them out: My CPAN Libraries diff --git a/_content/Debugging-Axiom-#1-2007-2-28.yml b/_content/Debugging-Axiom-#1-2007-2-28.yml new file mode 100644 index 0000000..1effab5 --- /dev/null +++ b/_content/Debugging-Axiom-#1-2007-2-28.yml @@ -0,0 +1,8 @@ +title: Debugging Axiom #1 +time: 2007-02-28 13:02:04 +tags: + - Site-News + - Software-Development +content-type: html +content: | +There is an axiom coined, I believe, by Sir Arthur Conan Doyle in his Sherlock Holmes novels that every programmer should keep in mind while debugging his program.
When you have eliminated every other possibility, whatever is left, however improbable, must be the solution.
There has been many a time when I could have arrived at the answer much sooner but was stuck because what the program was doing seemed to be impossible. When I accepted that it was possible then I was able to begin tracking down how it was possible and thus find the solution. After one too many occurrences of this I think I'm going to make a big poster with this axiom on it and hang it above my monitor. diff --git a/_content/Demo-of-Iterate-erlang-project-2009-4-11.yml b/_content/Demo-of-Iterate-erlang-project-2009-4-11.yml new file mode 100644 index 0000000..bf4ce1e --- /dev/null +++ b/_content/Demo-of-Iterate-erlang-project-2009-4-11.yml @@ -0,0 +1,14 @@ +title: Demo of Iterate! erlang project number two is up +time: 2009-04-11 22:08:25 +tags: + - Site-News + - erlang + - iterate + - nitrogen +content: |
+Iterate*!* is my Scrum style project management tool, inspired by my dislike
+for XPlanner's UI. Iterate*!* was started about a month ago and is coded in
+[erlang](http://erlang.org/) using
+[mochiweb](http://code.google.com/p/mochiweb/), and
+[nitrogen](http://nitrogenproject.com/).
Kick the tires and whatnot if you
+want to see it in action: http://iterate.marzhillstudios.com:8001/ diff --git a/_content/Desktop-Cube-2007-2-22.yml b/_content/Desktop-Cube-2007-2-22.yml new file mode 100644 index 0000000..32c1f85 --- /dev/null +++ b/_content/Desktop-Cube-2007-2-22.yml @@ -0,0 +1,6 @@ +title: Desktop Cube +time: 2007-02-22 16:36:50 +section: Site-News +content-type: html +content: | +Desktop Cube with 3d window effects diff --git a/_content/Did-you-ever-need-to-index-an-xm-2006-5-18.yml b/_content/Did-you-ever-need-to-index-an-xm-2006-5-18.yml new file mode 100644 index 0000000..f305c1d --- /dev/null +++ b/_content/Did-you-ever-need-to-index-an-xm-2006-5-18.yml @@ -0,0 +1,11 @@ +title: Did you ever need to index an xml doc +time: 2006-05-18 16:26:57 +tags: + - Data + - Languages + - Perl + - Software-Development + - XML +content-type: html +content: | +and preserve the xml information in the index? May I present "the XML Indexer". My brother, whose very popular AJAX Bible app has been getting attention, needed an xml index of the KJV Bible. He asked if I could help him get it. We would be parsing the KJV in XML format and I needed to pull out the reference information for every occurrence of every word. Well I thought an xml indexer might be useful in more than one capacity and there wasn't much on the net or cpan with the capability to do it. It needed to be light and fast because it was going to be parsing the entire bible so a DOM parser was out of the question. So I wrote my own. xml_indexer.pm is a module to index the words in an xml document and preserve the xml information about each occurrence of the word. It's a little rough around the edges right now but it works. It uses the expat parser so it's light and fast. Look at the bible_index.pl script for an example of how it works. I'll do a tutorial on it later. Update: This baby has been confirmed to parse the entire bible in Zefania xml format in under 3 minutes. That is a 16 MB file. 
It spits out a 23 MB index in that space of time. Quite honestly it surprised me. diff --git a/_content/Do-people-really-believe-this-st-2005-9-28.yml b/_content/Do-people-really-believe-this-st-2005-9-28.yml new file mode 100644 index 0000000..e212a19 --- /dev/null +++ b/_content/Do-people-really-believe-this-st-2005-9-28.yml @@ -0,0 +1,6 @@ +title: Do people really believe this stuff? +time: 2005-09-28 21:17:31 +section: Site-News +content-type: html +content: | +FOXNews.com - Views - Massachusetts Should Close Down OpenDocument. Man, the litany of folks not getting it just keeps growing and growing. I'm going to say this one more time: Open Source is here to stay. Get over it and start learning how to do business in the new environment. And if someone can explain how Massachusetts choosing an Open Document "Standard" kills competition I'd like to know. I mean surely letting non-Microsoft providers bid on government contracts would increase, not decrease, competition. And no one said Microsoft couldn't offer the document format themselves. Microsoft created this environment; now they can sink or swim with the rest of us in it. diff --git a/_content/Error-handling-Erlang-vs-Other-2009-4-11.yml b/_content/Error-handling-Erlang-vs-Other-2009-4-11.yml new file mode 100644 index 0000000..4964c86 --- /dev/null +++ b/_content/Error-handling-Erlang-vs-Other-2009-4-11.yml @@ -0,0 +1,13 @@ +title: Error handling (Erlang vs Other languages) +time: 2009-04-11 09:59:12 +tags: + - Site-News + - best-practices + - coding + - erlang + - functional-programming +content-type: html +content: | +When I'm in Perl, Javascript, Java, or any other programming language I prefer to throw exceptions rather than return errors. This is because I've long since gotten tired of losing errors 5 calls deep in the code because I forgot to check the return and handle it. 
Nothing is more annoying than trying to debug a problem that is actually caused somewhere else, but you don't know because the error vanished into that Great Heap in the Sky about 5 calls down. You've been there before. That's when you start adding print statements followed by an exit or, if you're lucky enough to have one, firing up the debugger and using break statements to narrow down the actual source of the problem. This process could take days depending on how far removed from the breaking code the actual problem is. In erlang, however, returning errors actually turns out to be useful. This is because in erlang returning errors actually has the desired effect. Usually, when you return errors in an app, it is because you don't want to kill the whole program when something breaks. What you intend is for the caller to inspect the return and do the appropriate thing. This however runs smack into the whole
programmers are fallible people, who sometimes stay up late coding at 3am, when they can hardly see the screen anymore
problem. In short you are depending on the caller to honor your contract and do the right thing even though we often do exactly the wrong thing. Even when you are the caller, sometimes you don't honor that contract. Then a month later you are doing that print statement and die or perhaps the debugger dance again. All of this because of one night when your judgement lapsed. Companies will often develop elaborate style guides, testing strategies, and code-review cultures to prevent this, but in the end most developers I know come to love throwing exceptions... unless they are coding in erlang. This is because of two elements of the erlang language design: Pattern Matching and Fail Fast. In erlang returning errors requires the caller to handle them. If I try to store the return of a function and it doesn't match what I told it to expect then bang, an automatic exception. However, if I try to store the return and explicitly handle the error case then no exception is thrown. This has the wonderful effect of forcing me to think about exactly what I expect this function to return, handling or ignoring it at my choice. The big benefit is that I had to think about it. The below example illustrates the difference.
+
+% pattern match on the return: anything but {ok, Value} crashes right here
+{ok, Value} = do_something(Args),
+% or explicitly handle the error case at the call site
+case do_something(Args) of
+    {ok, Value2} -> Value2;
+    {error, Reason} -> handle_error(Reason)
+end
+
+The combination of Pattern Matching and Fail Fast in erlang forces the programmer to honor the contract, whether he wants to or not. This is one case where Erlang follows the "Do What I Need Not What I Want" principle in a language properly. diff --git a/_content/Ever-wished-your-windows-box-2005-5-16.yml b/_content/Ever-wished-your-windows-box-2005-5-16.yml new file mode 100644 index 0000000..7e342f4 --- /dev/null +++ b/_content/Ever-wished-your-windows-box-2005-5-16.yml @@ -0,0 +1,6 @@ +title: Ever wished your windows box... +time: 2005-05-16 01:10:02 +section: Site-News +content-type: html +content: | +could email you when certain events happened? It has an eventlog; why can't it just email you when it sees an event occur? Well I decided it was time to add just that functionality. So...
+I proudly present EventNotifierV2.pl. A script that checks your windows eventlog for events and emails you when it sees them. Surely someone else has a need for this. diff --git a/_content/Example-shortcut-2006-6-22.yml b/_content/Example-shortcut-2006-6-22.yml new file mode 100644 index 0000000..f577bfb --- /dev/null +++ b/_content/Example-shortcut-2006-6-22.yml @@ -0,0 +1,6 @@ +title: Example shortcut +time: 2006-06-22 15:15:34 +section: Site-News +content-type: html +content: | +Shortcut that launches dtach and fires a shell to use in it diff --git a/_content/FOAF-2006-1-25.yml b/_content/FOAF-2006-1-25.yml new file mode 100644 index 0000000..b42e75c --- /dev/null +++ b/_content/FOAF-2006-1-25.yml @@ -0,0 +1,6 @@ +title: FOAF +time: 2006-01-25 14:18:04 +section: Site-News +content-type: html +content: | +I've been FOAF'ized. Do you have a FOAF page yet? diff --git a/_content/First-Draft-of-the-Bricklayer-Do-2005-12-7.yml b/_content/First-Draft-of-the-Bricklayer-Do-2005-12-7.yml new file mode 100644 index 0000000..03fdc9e --- /dev/null +++ b/_content/First-Draft-of-the-Bricklayer-Do-2005-12-7.yml @@ -0,0 +1,11 @@ +title: First Draft of the Bricklayer Documentation +time: 2005-12-07 23:44:39 +tags: + - Site-News + - BrickLayer + - Languages + - Perl + - Software-Development +content-type: html +content: | +I just finished the first draft of the Bricklayer development manual. You can see it here: Bricklayer Manual. Take a look and tell me if you see anything that might need more clarification or spelling correction. 
diff --git a/_content/First-Look-at-Polymer-Elements-2013-09-17.yaml b/_content/First-Look-at-Polymer-Elements-2013-09-17.yaml new file mode 100755 index 0000000..91be101 --- /dev/null +++ b/_content/First-Look-at-Polymer-Elements-2013-09-17.yaml @@ -0,0 +1,171 @@ +title: First Look at Polymer Elements +author: Jeremy Wall +time: 2013-09-17 +timeformat: 2006-01-02 +content-type: markdown +tags: + - site-news + - polymer + - web + - w3c + - webcomponents +content: |
+Introduction
+============
+
+For work I've been getting up to speed on the W3C's set of webcomponents standards, which means I've been looking at [Polymer](http://polymer-project.org). Polymer is both an experimental javascript framework and a shim library that simulates the portions of the W3C standard which are not yet implemented in browsers. Specifically I've been looking at [Custom Elements](http://www.w3.org/TR/components-intro/#custom-element-section) and [Templates](http://www.w3.org/TR/components-intro/#template-section) since they are the more concrete portions of the standard right now.
+
+At some point when you are exploring a new technology the docs and tutorials stop being useful to you, and to really get a feel you have to build something in it. I decided to port parts of a javascript application, [DynamicBible](http://dynamicbible.com), that I had already written as a learning exercise. [DynamicBible](http://dynamicbible.com) currently uses [requirejs](http://requirejs.org) for its javascript code as a way to manage dependencies and keep scoping sane. This made it perfect for my purposes since it allowed me to explore two types of polymer elements.
+
+* UI elements.
+* Utility elements that don't have a visible presence on the page.
+
+For my purposes I ported the DynamicBible search box and a requirejs importer element. In this article I'll cover the search box element. The requirejs element will be covered in a later article.
+
+Creating your own Polymer Element
+=================================
+
+``` html
+<!-- pull in the polymer library and platform shims -->
+<script src="polymer.min.js"></script>
+
+<polymer-element name="search-box">
+  <template>
+    <input type="text">
+  </template>
+  <script>
+    Polymer('search-box');
+  </script>
+</polymer-element>
+```
+
+The `<polymer-element>` tag is the declarative way to define your polymer element in html itself. The bare minimum to define a polymer element would be
+
+``` html
+<polymer-element name="search-box" noscript>
+  <template></template>
+</polymer-element>
+```
+
+This is about as useful as the span element though, and html already has a span element. We need a little more than this to be worth it. Our element needs some attributes and behavior. Polymer lets us describe the expected attributes using an attribute called, what else, `attributes`.
+
+``` html
+<polymer-element name="search-box" attributes="query">
+```
+
+As for the behavior to attach to this element, that brings us to the Polymer construction function.
+
+``` html
+<polymer-element name="search-box" attributes="query">
+  <script>
+    Polymer('search-box', {
+      query: '',
+      search: function() {
+        // submit a search for the current query; here we fire an
+        // event that the page can listen for
+        this.fire('search', {query: this.query});
+      }
+    });
+  </script>
+</polymer-element>
+```
+
+You can use the element the same way you would any other html element.
+
+``` html
+<search-box id="searchBox"></search-box>
+```
+
+Now our element has a method on it that will submit a search using the value of our search-box query attribute. We could trigger this behavior right now with javascript.
+
+``` js
+document.querySelector('#searchBox').query = "what is a polymer element?";
+document.querySelector('#searchBox').search();
+```
+
+It's kind of silly that we have to do that manually with javascript though. What we really want is for this element to detect changes in our query and perform the search for us.
+
+``` html
+<polymer-element name="search-box" attributes="query">
+  <script>
+    Polymer('search-box', {
+      query: '',
+      // polymer calls <name>Changed automatically when the
+      // matching attribute changes
+      queryChanged: function() {
+        this.search();
+      },
+      search: function() {
+        this.fire('search', {query: this.query});
+      }
+    });
+  </script>
+</polymer-element>
+```
+
+Now when the element's query attribute changes a search is triggered.
+
+Up to now our element hasn't been very visible. We need to give it an image boost. We can do this two different ways.
+
+1. Using a template
+2. Using element inheritance
+
+We'll go with the template element for now. The element inheritance will come in handy later.
+
+``` html
+<polymer-element name="search-box" attributes="query">
+  <template>
+    <input type="text" value="{{ query }}">
+  </template>
+  <script>
+    Polymer('search-box', {
+      query: '',
+      queryChanged: function() {
+        this.search();
+      },
+      search: function() {
+        this.fire('search', {query: this.query});
+      }
+    });
+  </script>
+</polymer-element>
+```
+
+There are a number of things going on in this template element. First we define some html that will be used whenever we render the element on a page. There is some complicated merging logic involving the [shadow dom](http://www.w3.org/TR/components-intro/#shadow-dom-section) but we'll ignore that for now.
+Second, our value attribute on the input element has funny looking content. The `{{ query }}` tells polymer that the contents of the text box and the query attribute should be kept in sync. A change to one of them is reflected in the other. Furthermore a change to the input box will result in the queryChanged event firing and causing a search to get submitted. There are several more built in events and Polymer includes a way to create and listen to your own events as well.
+
+I'll cover a utility element that shims requirejs modules to make them useable in your polymer elements in a later article.
+
+Our element's template element isn't terribly complicated and it turns out in our case it is completely unnecessary. We can use `extends` to tell Polymer our element should inherit from the input element.
+
+Our last tweak to the search-box element looks like this.
+
+``` html
+<!-- input already defines the attributes we need -->
+<polymer-element name="search-box" extends="input">
+  <script>
+    Polymer('search-box', {
+      valueChanged: function() {
+        this.search();
+      },
+      search: function() {
+        this.fire('search', {query: this.value});
+      }
+    });
+  </script>
+</polymer-element>
+```
\ No newline at end of file diff --git a/_content/Functional-Programming-vs-OO-Pro-2009-4-26.yml b/_content/Functional-Programming-vs-OO-Pro-2009-4-26.yml new file mode 100644 index 0000000..b06b26b --- /dev/null +++ b/_content/Functional-Programming-vs-OO-Pro-2009-4-26.yml @@ -0,0 +1,12 @@ +title: Functional Programming vs OO Programming +time: 2009-04-26 20:52:39 +tags: + - Site-News + - comparison + - erlang + - functional-programming + - insanity + - OO +content-type: html +content: | +I've been thinking lately about the differences between OO and FP. FP languages don't really support the OO model because of a difference in design. Some try, but the "true" functional languages make no such effort, and for good reason. One of the primary principles of OO programming is the black box. An object is meant to represent some entity in the system. All the state and valid actions for that entity are encapsulated in the Object and the only visibility you have into them is the interface the object provides to the outside world. 
Programs tend to be defined as a set of interactions between objects. FP takes a different approach. In FP data is not modifiable only transformable by functions. In FP the functions expect certain input and give back certain output for that input. They certainly don't maintain any state on their own. You can't have an object if you can't have modifiable internal state. In OO you are encouraged to group together state and activity on that state. In OO you don't think so much about state as you do about entities. Person, Car, ATM machine. All of these are commonly referenced objects in an OO tutorial or textbook. Consumers of your object are encouraged not to think about the state of a person or a car or an ATM machine. Instead they think in terms of allowable interactions with that object: Talk, Drive, Withdraw Money. Within OO code there are a huge number of possible orders of various interactions possible. So large they are just about impossible to completely test or even visualize. For example say I'm programming a person object. I've given the person object a handshake method. However, depending on the internal state of the person object that method may or may not work the same way or be supposed to be called. Maybe the person object is in a bad mood, in which case the handshake method is non-functional. How do we handle that? Hrmm, so first before calling handshake on the person object we should call the offer hand method. No wait, before we call the offer hand method, perhaps we should look at their face first to gauge their mood. This could get complicated pretty quickly. Furthermore there is nothing in the OO language that allows us to specify that the offer hand method should happen first. We obviously need to know quite a bit about the state of that person object in order to properly interact with it. Contrast that with an FP approach. This gets a little bit simpler for some people to visualize suddenly. 
Rather than an object, they see state and a set of possible transformations of that state. The state may be something like this: person1's mood and person2's mood, plus an interact function. If person1's and person2's moods are good then the interact function offers a hand for a handshake, the two shake hands, and interact returns the new person1 mood and person2 mood. Or if the mood of one of them is bad then it does some other action and then returns the new person1 mood and person2 mood. By placing the emphasis on the state involved the coder has reduced the problem to a set of possible transformations based on that state. In OO, an object's methods define the interface. But passing the same input to an object will not always result in the same output. Most of the time methods modify internal state and the return depends on the internal state. At any time in your code the internal state of an object must be treated as an unknown. This means the return of your object's method must also be treated as an unknown. Every object has to treat every other object like it has some sort of mental disease and could have a psychotic break any moment. It could do just about anything and if you didn't define the proper response to what they do then who knows what might happen. In FP you are encouraged to think in terms of state transformations. You are forced to consider what the state you are dealing with is and then transform that state appropriately. Since the state is of primary importance, after a while FP feels more natural than OO to certain mindsets. I'm discovering that I rather like the FP mindset and miss OO less and less these days.
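A rough erlang sketch of the interaction described above (the function name and atoms are purely illustrative, not from any real codebase):
interact(good, good) ->
    % both moods are good: offer a hand, shake, and return the new moods
    {handshake, good, good};
interact(Mood1, Mood2) ->
    % one of the moods is bad: do some other action and return the new moods
    {nod, Mood1, Mood2}.
The function takes only the state it needs and returns the transformed state; given the same moods it always produces the same result.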
Glossary:
FP
Functional Programming
OO
Object Oriented
diff --git a/_content/Gmail-*Talk-2006-2-17.yml b/_content/Gmail-*Talk-2006-2-17.yml new file mode 100644 index 0000000..22c9a17 --- /dev/null +++ b/_content/Gmail-*Talk-2006-2-17.yml @@ -0,0 +1,6 @@ +title: Gmail Talk +time: 2006-02-17 10:30:50 +section: Site-News +content-type: html +content: | +One of the things Google does well is integration that makes sense. Take the new enhancements to Gmail and Google Talk. For everyone who thought we didn't need another talk client, may I humbly present: Gmail Talk beta. I knew they had something up their sleeves. And the integration is both seamless and easy. This is what makes Web Applications done right such a beautiful thing. I just log on one day and boom, my wife can IM me an important message with no work on my part whatsoever.
  • No client to install.
  • No configuration option to set up.
  • No service to sign into.
Now that is the way it should be. Invisible till I need it. Just instant communication from just about any web browser in the world. Now all I need is Google Calendar and my Google Experience is complete. diff --git a/_content/Go-Html-Transform-2013-02-26-2013.yml b/_content/Go-Html-Transform-2013-02-26-2013.yml new file mode 100644 index 0000000..4dcbac6 --- /dev/null +++ b/_content/Go-Html-Transform-2013-02-26-2013.yml @@ -0,0 +1,59 @@ +title: go-html-transform an html transformation and scraping library +time: 2013-02-26 17:05:00 +section: projects +tags: + - go + - html + - css +content: | + +http://code.google.com/p/go-html-transform is my html transformation library for go. I use it as an html templating language and scraping library. It's not your typical approach to html templating but it's an approach I've really come to enjoy. HTML templating can be grouped into roughly 3 categories. + +1. Templating languages. +1. HTML DSLs. +1. Functional transforms. + +go-html-transform is an example of that last one. The basic theory is that an html template is just data. No logic is in the template. All the logic is in the functions that operate on the template and any input data. Using the input data you can transform a template and then render the transformed AST back into html. This has a number of benefits. + +* Your template transforms are context aware. +* Multipass templating is just another transform. +* All your logic is expressed in real honest-to-goodness code, not a limited templating language. In the case of go-html-transform your templating logic is actually typechecked by the go compiler. +* It's impossible to generate bad html. +* Your mocks are your templates. +* You can use an html dsl in combination with this approach as well if the dsl outputs the same AST. + +Example usage. 
+=======
+
+``` go
+package main
+
+import (
+	"os"
+	"strings"
+
+	"code.google.com/p/go-html-transform/h5"
+	"code.google.com/p/go-html-transform/html/transform"
+)
+
+func toSSL(url string) string {
+	return strings.Replace(url, "http:", "https:", 1)
+}
+
+func main() {
+	f, err := os.Open("~/file.html")
+	if err != nil { return } // handle errors here.
+	defer f.Close()
+	tree, err := transform.NewDocFromReader(f)
+	if err != nil { return } // handle errors here.
+	t := transform.NewTransformer(tree)
+	t.ApplyAll(
+		transform.Trans(transform.ReplaceChildren(h5.Text("foo")), "span"), // replace every span tag's contents with foo
+		// turn every link and img into an ssl link
+		transform.Trans(transform.TransformAttrib("href", toSSL), "a"),
+		transform.Trans(transform.TransformAttrib("src", toSSL), "img"),
+	)
+
+	t.Render(os.Stdout) // render html to stdout.
+}
+``` diff --git a/_content/Google-Search:-get_cookie-perl-2005-6-17.yml b/_content/Google-Search:-get_cookie-perl-2005-6-17.yml new file mode 100644 index 0000000..44b187b --- /dev/null +++ b/_content/Google-Search:-get_cookie-perl-2005-6-17.yml @@ -0,0 +1,6 @@ +title: Google Search get_cookie perl +time: 2005-06-17 03:50:56 +section: Site-News +content-type: html +content: | +Somehow my very own tutorial on perl and CGI has made it to the top of the google search heap. I don't know how long it will stay there so I thought I'd bask while I could: Google Search: get_cookie perl Guess I really do have to write part two of that thing now. I'm also on page two of a google search for "perl cookies keys". I found these by perusing my logs. You can find some interesting things out by looking at those. diff --git a/_content/Hello-2006-12-2.yml b/_content/Hello-2006-12-2.yml new file mode 100644 index 0000000..dd9a2ff --- /dev/null +++ b/_content/Hello-2006-12-2.yml @@ -0,0 +1,8 @@ +title: Hello! +time: 2006-12-02 19:09:34 +tags: + - Site-News + - Uncategorized +content-type: html +content: | +Just Julie popping in to give a big "HI!" 
I thought it'd been a while since there was a post here, so here is a little one. diff --git a/_content/Heres-a-gotcha-for-anyone-devel-2007-2-6.yml b/_content/Heres-a-gotcha-for-anyone-devel-2007-2-6.yml new file mode 100644 index 0000000..18e8d6b --- /dev/null +++ b/_content/Heres-a-gotcha-for-anyone-devel-2007-2-6.yml @@ -0,0 +1,6 @@ +title: Here's a gotcha for anyone developing with mod_perl and APR +time: 2007-02-06 01:31:29 +tags: Site-News +content-type: html +content: | +I had a mod_perl handler that was mapped to different URLs. One worked as expected; the other URL did not. Exact same code. Everything was hard coded for testing so the URL had no bearing whatsoever on the code. So why on earth would one URL have an error and the other not? It took me a while but I finally figured it out. What else could have been different about the two URLs besides the URL itself? The answer? Cookies. Specifically, in this case, cookies stored by WordPress on this particular domain. More specifically, a cookie with commas, which counts as a malformed cookie and just so happens to crash Apache2::Cookie every time. Since I couldn't count on the cookie not coming back I decided to implement a workaround. So the next time you build something for the web on mod_perl2 using Apache2::Cookie you might just want to preprocess the Cookie header before trying to pull cookies out of it. I wrote a simple function that split the cookies out of the header, removed the commas, semicolons, and any whitespace from the cookie values, and then overwrote the Cookie header before trying to retrieve the cookies. You can't always assume that the cookies you see come in are cookies you set. Cross-domain cookies and cookies used by other apps in your domain name space may be a source of trouble if you aren't careful. I learned this the hard way but at least it was on a testbed site and not something for production. 
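The workaround itself is language agnostic: split the Cookie header apart yourself, strip the offending characters out of each value, and rewrite the header before the cookie library ever sees it. A rough sketch of that sanitizing step, in Go rather than the original mod_perl code (the function name and the exact character set are illustrative, not my actual Perl):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeCookieHeader splits a raw Cookie header into name=value pairs
// and strips commas, semicolons, and whitespace out of each value, so a
// strict parser won't choke on one malformed cookie.
func sanitizeCookieHeader(header string) string {
	var clean []string
	for _, pair := range strings.Split(header, ";") {
		pair = strings.TrimSpace(pair)
		name, value, found := strings.Cut(pair, "=")
		if !found {
			continue // skip fragments that aren't name=value
		}
		value = strings.Map(func(r rune) rune {
			switch r {
			case ',', ';', ' ', '\t':
				return -1 // drop the offending character
			}
			return r
		}, value)
		clean = append(clean, name+"="+value)
	}
	return strings.Join(clean, "; ")
}

func main() {
	// a malformed WordPress-style cookie value containing commas
	fmt.Println(sanitizeCookieHeader("wp=a,b, c; sid=xyz"))
}
```

The same split-scrub-rejoin shape translates directly back to the handful of lines of Perl it took in the handler.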
Related Links: WordPress Ticket diff --git a/_content/I-am-now-officially-on-the-marke-2006-12-14.yml b/_content/I-am-now-officially-on-the-marke-2006-12-14.yml new file mode 100644 index 0000000..c631035 --- /dev/null +++ b/_content/I-am-now-officially-on-the-marke-2006-12-14.yml @@ -0,0 +1,6 @@ +title: I am now officially on the market as a freelancer. +time: 2006-12-14 00:36:30 +section: Site-News +content-type: html +content: | +So contact me for a job now before I become full time employed again. For that matter feel free to contact me about full time employment too. diff --git a/_content/I-changed-my-permalink-structure-2006-1-19.yml b/_content/I-changed-my-permalink-structure-2006-1-19.yml new file mode 100644 index 0000000..3cd03a1 --- /dev/null +++ b/_content/I-changed-my-permalink-structure-2006-1-19.yml @@ -0,0 +1,6 @@ +title: I changed my permalink structure +time: 2006-01-19 17:49:54 +section: Site-News +content-type: html +content: | +For two reasons. One, it will make capturing a static backup of my site easier in the future using a tool like wget. Two, I'm using a new analytics tool and this will help me track visitor traffic in much more meaningful ways. I don't think any links you might have saved before will stop working, but just in case they do... now you know why. diff --git a/_content/I-dont-usually-do-this-2006-4-5.yml b/_content/I-dont-usually-do-this-2006-4-5.yml new file mode 100644 index 0000000..a75dd59 --- /dev/null +++ b/_content/I-dont-usually-do-this-2006-4-5.yml @@ -0,0 +1,6 @@ +title: I don't usually do this +time: 2006-04-05 10:54:02 +section: Site-News +content-type: html +content: | +But I have clients on Network Solutions and they must have just about the worst service I've ever seen for DNS. And to top it off their servers go down way too often for the price they charge. Case In Point That was just the latest outage. So if any of you are thinking of getting DNS or Domain Names from them. Don't. 
Just a warning, but I really haven't been impressed by them. diff --git a/_content/I-upgraded-to-WP-202-finally-2006-4-18.yml b/_content/I-upgraded-to-WP-202-finally-2006-4-18.yml new file mode 100644 index 0000000..61da0ee --- /dev/null +++ b/_content/I-upgraded-to-WP-202-finally-2006-4-18.yml @@ -0,0 +1,6 @@ +title: I upgraded to WP 2.0.2 finally +time: 2006-04-18 15:09:00 +section: Site-News +content-type: html +content: | +I bit the bullet and upgraded finally. And might I say the upgrade was a snap. Not a single problem while doing so. I am impressed. diff --git a/_content/Im-about-to-make-your-day-2006-6-22.yml b/_content/Im-about-to-make-your-day-2006-6-22.yml new file mode 100644 index 0000000..72f1f69 --- /dev/null +++ b/_content/Im-about-to-make-your-day-2006-6-22.yml @@ -0,0 +1,6 @@ +title: I'm about to make your day +time: 2006-06-22 15:17:14 +section: Site-News +content-type: html +content: | +If you have always wanted the functionality of GNU Screen for your win32 box but couldn't get the win32 port to do the detach/reattach portion correctly, then you're not alone. And thanks to the power of open source some guy created a stripped down version called dtach that doesn't have all the fancy stuff. This version just so happens to compile and run perfectly on cygwin. It might even do it correctly on just a mingw32 setup. What this means is that you can now run detached terminal sessions on your windows box. I've provided the binary for you to use if you're interested. 
dtach Example shortcut to use it diff --git a/_content/Im-trying-out-a-new-theme-2005-12-8.yml b/_content/Im-trying-out-a-new-theme-2005-12-8.yml new file mode 100644 index 0000000..775172f --- /dev/null +++ b/_content/Im-trying-out-a-new-theme-2005-12-8.yml @@ -0,0 +1,6 @@ +title: I'm trying out a new theme +time: 2005-12-08 22:58:34 +section: Site-News +content-type: html +content: | +I may be modifying it a bit in the next few days, probably to widen its total size since I'd like more width for the content section, but I like the overall look and colors. diff --git a/_content/Inception:-The-Distributed-Commu-2007-1-3.yml b/_content/Inception:-The-Distributed-Commu-2007-1-3.yml new file mode 100644 index 0000000..2c9decf --- /dev/null +++ b/_content/Inception:-The-Distributed-Commu-2007-1-3.yml @@ -0,0 +1,9 @@ +title: Inception - The Distributed Community API +time: 2007-01-03 22:12:12 +tags: + - APIs + - BrickLayer + - Software-Development +content-type: html +content: | +I have been noodling an idea lately that I think I will begin to implement in Bricklayer. A distributed MySpace/FaceBook of sorts. Think of it as everything you like about a Community Website with none of the Myspace pains. Your profile and content will be yours and under your control. But the network will still be there. It's distributed because it's a network of individual unrelated sites. A Blog, Forum, or Discussion board perhaps. It will allow disparate sites to network using similar technologies to trackback pings, with less spam because a request for friend status with an identity will have to be explicitly allowed. In theory, with this API joining a community like Myspace would be as simple as requesting Friend Status with a Service Website. Leaving would be as simple as revoking the status with the Service Website. Either way you still keep the identity and Content, and moving it from Service to Service or using it in Multiple Services would be simple and easy. 
My initial Design Notes can be found here. diff --git a/_content/Information-Hook-Up-2005-6-22.yml b/_content/Information-Hook-Up-2005-6-22.yml new file mode 100644 index 0000000..4339441 --- /dev/null +++ b/_content/Information-Hook-Up-2005-6-22.yml @@ -0,0 +1,6 @@ +title: Information "Hook-Up" +time: 2005-06-22 05:08:26 +section: Site-News +content-type: html +content: | +Are you the Information "Hook-Up" for your family and friends? Do you know the ins and outs of internet research? If someone needs something are you the guy who can get it for them? Every good prison movie has the guy who can get you anything. Need a picture of Raquel Welch to hide that unsightly hole in your wall? Got a yearnin' for some McDonalds food? The prison hook-up guy can get it for you. IT departments and office areas have a similar guy. Need to find a pdf printer driver for free? I can get ya that. Need to find information about government guidelines regarding the storage and sharing of medical data? I can get ya that. Want to know the mating habits of wild geese? Yep, you guessed it. I can get ya that. Just call me your Information Hook-Up. Actually those aren't even obscure items. How about this one? Need to know an easy way to get at the data in STI's School Management application? Yep, I got that too. Man... I gotta get a life. diff --git a/_content/Insecurity-in-Open-Source?---A-R-2004-2-15.yml b/_content/Insecurity-in-Open-Source?---A-R-2004-2-15.yml new file mode 100644 index 0000000..515a65d --- /dev/null +++ b/_content/Insecurity-in-Open-Source?---A-R-2004-2-15.yml @@ -0,0 +1,6 @@ +title: Insecurity in Open Source? - A Rebuttal +time: 2004-02-15 00:00:29 +section: Site-News +content-type: html +content: | +Mr. A. Russel Jones is betraying a remarkable amount of dim thinking in his article "Open Source Is Fertile Ground for Foul Play". The following is my step by step rebuttal of his arguments: Mr. 
Jones begins by saying: "This will happen because the open source model, which lets anyone modify source code and sell or distribute the results, virtually guarantees that someone, somewhere, will insert malicious code into the source." While I am sure many people try to do so, such code doesn't get very far. Peer review is a very powerful process. It has been repeatedly demonstrated to be superior to any commercial variant in identifying a threat. Jones goes on to give supporting statements for his premise shown above. His first supporting argument is: Open source advocates rightfully maintain that the sheer number of eyes looking at the source tends to rapidly find and repair problems as well as inefficiencies—and that those same eyes would find and repair maliciously inserted code as well. Unfortunately, the model breaks down as soon as the core group involved in a project or distribution decides to corrupt the source, because they simply won't make the corrupted version public. Therefore, security problems for governments begin with knowing which distributions they can trust. Indeed? You are correct: knowing which distributions to trust is a part of it. You forgot one other thing though. If the corrupted version isn't made public then it isn't really open source, is it? Say said company did make a malicious version of, say, Linux, then refused to make the source code available. Big warning flag there. The distro simply won't sell. If they release a fake version of the source code, peer review suddenly kicks in. Someone tries it out. They notice that the kernel binary that comes with the distro doesn't match up with their compiled kernel. They raise the alarm. Serious legal action takes place and again the distro is no more. Now take that same scenario in a Closed Source application. You don't have a way to know whether there is malicious code in the app. You have no clean apps to compare it against. You have no peer review. 
In short, Open Source has an additional layer of protection in this scenario which Closed Source does not have. There are a number of very trustworthy distributions out there which have proved themselves. Stick with them and you will be fine. Jones goes on to say: Open source software goes through rigorous security testing, but such testing serves only to test known outside threats. The fact that security holes continue to appear should be enough to deter governments from jumping on this bandwagon, but won't be. Worse though, I don't think that security testing can be made robust enough to protect against someone injecting dangerous code into the software from the inside—and inside, for open source, means anyone who cares to join the project or create their own distribution. This is either an outright lie, or an example of a massive lack of understanding. Such testing does not work only against known outside threats. It will also uncover malicious code in the source. Testers in open source don't just download the binary and run it. They download the source, look it over, compile it, and then run it. They may also do the same with the binary, which only serves to point out differences in the binary versus their compiled version. The system of peer review makes it practically impossible to hide parts of your source from the public without them knowing about it. Someone will notice, believe me. Jones doesn't of course stop here though: Third, an individual or group of IT insiders could target a single organization by obtaining a good copy of Linux, and then customizing it for an organization, including malevolent code as they do so. That version would then become the standard version for the organization. Given the prevalence of inter-corporation and inter-governmental spying, and the relatively large numbers of people in a position to accomplish such subterfuge, this last scenario is virtually certain to occur. 
Worse, these probabilities aren't limited to Linux itself, the same possibilities (and probabilities) exist for every open source software package installed and used on the machines. He seems to be saying here that someone in the targeted organization's IT department could use their position to distribute modified Open Source apps maliciously. But seriously, if you hired malicious IT personnel, Open Source is the least of your problems. Open Source can hardly be held responsible for your poor hiring practices. Such a person is a danger whether you're running Windows and Office, or Linux and OpenOffice. He could release a Windows virus on the network, or write malicious VBA code for the office suite, just as easily as he could distribute modified Open Source apps. None of Jones's suppositions hold any weight, and as such I fear we must file his argument under file 13. diff --git a/_content/Internet-Daily:-BellSouth-wants-2006-1-18.yml b/_content/Internet-Daily:-BellSouth-wants-2006-1-18.yml new file mode 100644 index 0000000..1408bcd --- /dev/null +++ b/_content/Internet-Daily:-BellSouth-wants-2006-1-18.yml @@ -0,0 +1,6 @@ +title: Internet Daily - BellSouth wants new Net fees +time: 2006-01-18 11:33:28 +section: Site-News +content-type: html +content: | +The Net is buzzing about BellSouth and its rather strange view on internet access. Some details can be seen in the article below. Market Watch - BellSouth Story Techdirt has had some very good commentary also: Techdirt - BellSouth It looks like someone has forgotten what exactly their product is and what the value of that product is. If you are a BellSouth customer you might do well to take a close look. An ISP sells just one thing: access. And they sell it to just one market: the accessor. Google isn't using BellSouth's pipes. BellSouth's DSL customer is. The same goes for Yahoo, AOL, Apple's iTunes and so on. The subscriber to the ISP requests the content. This is a PULL not a PUSH relationship. 
BellSouth had better get its head in the game or they are going to find they've lost their market because they forgot what product they were selling. diff --git a/_content/Is-NIH-really-so-bad?-2010-3-2.yml b/_content/Is-NIH-really-so-bad?-2010-3-2.yml new file mode 100644 index 0000000..630e80d --- /dev/null +++ b/_content/Is-NIH-really-so-bad?-2010-3-2.yml @@ -0,0 +1,25 @@ +title: "Is NIH really so bad?" +time: 2010-03-02 +timeformat: "2006-01-02" +tags: + - clojure + - molehill + - nih + - web-framework +content: | +This site is now powered by molehill. I say this because for one it's nice to announce this sort of stuff and two because it leads into the main content of the article. Molehill is a static site generator and commandline content management system. It uses vcs to track changes and can build a site from files containing content on a filesystem. Because molehill is a site generator and not a web application it doesn't serve files up as part of its normal operation. But while developing the application I found myself wishing I could temporarily serve the content from a server to see how it looks and behaves in that environment. So I needed an embedded webserver. Since this wasn't meant for heavy extended use, and mostly only for testing a molehill site, I didn't want anything fancy or complicated to use. My requirements were simple. + + 1. It had to serve content from a directory + 2. It had to allow you to specify the port + 3. It shouldn't require a lot of dependencies + 4. It shouldn't be hard to add. + +Now Molehill is written in clojure, which means theoretically it has access to all the wonderful libraries java has accumulated over the years. So finding a small simple embedded webserver should be easy, right? I looked at: + + 1. Jetty (not easy to set up or use) + 2. BareHTTP (looked like it did what I needed but the code didn't actually work) + 3. 
Commanche (not easy to set up or use) + +Finally I found a less than 100 line clojure gist on github that did exactly what I needed. When I twittered about the find and my frustration looking for a java option I received responses ranging from "Cool, that didn't take long" to "Why are you so down on Jetty?" The latter opinion seemed to be that Jetty was fine, so why invent another solution? One person said I was crazy since his production jetty xml config file was only 50 lines long. Keep in mind that I wanted to launch this from the commandline with only a single command line parameter, the port, for configuration. I certainly didn't want to have to write an xml file every time I started a new site. Why put the user through that? Granted I could probably have programmatically configured the Jetty server using clojure code and not needed the file, but again, why should I have to? In my mind the extra dependency, lack of simplicity, and general yuckiness of using jetty for this wasn't a good fit. What I needed was something that fit in a single clojure file, had no dependencies, and did simple webserving. For that the github gist was exactly what I needed. It improved things for the user and the coder and in my mind that's a win-win. + +I agree that trying to invent everything yourself can be prohibitively time consuming and fraught with unexpected problems. Spending a little time looking for something that has already solved your particular problem is a good investment. But making it a religious belief can be just as prohibitive. Jetty is very good as an embedded java http servlet container and server. It does a fantastic job at this and if you need that in your app it's a lightweight way to go. But if all you want is a temporary static site server with only two configuration options then Jetty suddenly gets in your way. Sometimes the only solution is your own solution and that's ok. 
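For a sense of just how small that kind of server is, here is a rough sketch of the same idea against Go's standard library — not the actual clojure gist, and the `newStaticHandler` name and hard-coded port are mine:

```go
package main

import (
	"log"
	"net/http"
)

// newStaticHandler serves files straight out of dir, which is all a
// temporary preview server for a generated site needs to do.
func newStaticHandler(dir string) http.Handler {
	return http.FileServer(http.Dir(dir))
}

func main() {
	// serve the current directory; the directory and port would be the
	// only command line parameters in the real tool.
	log.Fatal(http.ListenAndServe(":8080", newStaticHandler(".")))
}
```

Directory, port, no dependencies, trivial to embed: all four requirements in a dozen lines, which is roughly what the clojure gist bought me.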
So the next time someone announces they had to build something because your favourite library/application/tool didn't do what they needed, don't yell at them. Congratulate them on finding an unfulfilled need and wish them luck in their endeavor. It might not be NIH. It might instead be NIY (Not Invented Yet) and you just didn't realize it. diff --git a/_content/Learning-Erlang-2007-9-17.yml b/_content/Learning-Erlang-2007-9-17.yml new file mode 100644 index 0000000..fff664f --- /dev/null +++ b/_content/Learning-Erlang-2007-9-17.yml @@ -0,0 +1,20 @@ +title: Learning Erlang +time: 2007-09-17 03:21:21 +tags: + - Site-News + - erlang + - etap + - Open-Source + - OSS-Apps + - Testing +content: | +I have taken on the task of learning erlang. I was trying to decide between +learning Haskell, OCaml, or Erlang. OCaml I decided against since it had too +close a similarity to C and I wanted to really stretch myself. +Haskell and Erlang both fit that bill, however I found the Erlang +documentation to be far better for someone completely new to the functional +programming world. Haskell's idea of a tutorial tried to cover too many +concepts at once and took too long to get to the hands on stuff. Also erlang +offered the opportunity to learn distributed programming concepts along the way, +so erlang it was. You can see my first erlang project +[etap here](http://github.com/zaphar/etap). diff --git a/_content/Linus-speaks-out-about-DRM-GPLV3-2006-2-3.yml b/_content/Linus-speaks-out-about-DRM-GPLV3-2006-2-3.yml new file mode 100644 index 0000000..3ae16cc --- /dev/null +++ b/_content/Linus-speaks-out-about-DRM-GPLV3-2006-2-3.yml @@ -0,0 +1,6 @@ +title: Linus speaks out about DRM/GPLV3 +time: 2006-02-03 11:36:11 +section: Site-News +content-type: html +content: | +If you've been floating around the net for any significant amount of time you've probably heard about DRM. And if you like Open Source you've probably heard a little about the GPLV3 license brouhaha. 
No doubt you're even wondering what exactly it all means. Well, Linus sums it up pretty well. The key points here are that DRM is primarily about using valuable security technologies in unintended ways. The same possibilities that make DRM useful help make systems more secure. The way to fight DRM is not fighting the technology. It's protecting the content. If you dislike DRM then make sure your content can never be used in a DRM protected work. Protect your content, don't fight the technology. Open Source is winning the software licensing battle because it produces quality products under a less restrictive environment for use. Open Content can do the same. That's something the EFF seems to have forgotten. Let's hope the artists start paying attention before the barrier to entry becomes too high. diff --git a/_content/Look-Mom-I-Pimped-My-Desktop-2007-2-3.yml b/_content/Look-Mom-I-Pimped-My-Desktop-2007-2-3.yml new file mode 100644 index 0000000..95d79dc --- /dev/null +++ b/_content/Look-Mom-I-Pimped-My-Desktop-2007-2-3.yml @@ -0,0 +1,6 @@ +title: Look Mom!! I Pimped My Desktop!!! +time: 2007-02-03 01:22:16 +section: Site-News +content-type: html +content: | +Thanks to the wonders of open source, I now have a fully OpenGL accelerated desktop. Complete with realtime window opacity, an Exposé-like interface, a totally awesome task switcher, wobbling windows, and window animations. But the totally blow-me-away feature was window thumbnailing. If I have a video running and minimize the window, then hover over it in the task bar, the thumbnail shows the running video!! I'm like a kid in a candy store. Now most people would be like "sure, who cares", but the task switcher and Exposé-like feature are incredibly useful. Beryl and XGL rock!!! Screenshots to come. 
diff --git a/_content/Looking-for-a-laptop-2007-3-23.yml b/_content/Looking-for-a-laptop-2007-3-23.yml new file mode 100644 index 0000000..793947f --- /dev/null +++ b/_content/Looking-for-a-laptop-2007-3-23.yml @@ -0,0 +1,6 @@ +title: Looking for a laptop +time: 2007-03-23 21:40:31 +section: Site-News +content-type: html +content: | +I'm in the market for a laptop. I'm either going to be buying a refurbished Mac, or I'm going to buy a laptop certified for linux. What I want in a laptop if I get a linux one: already installed and configured wifi; an Nvidia graphics chipset with dedicated memory; at least a DVD drive, DVD-RW preferred; at least 512MB of memory, 1GB preferred; at least an 80GB hdd, 100GB preferred; and a 10/100 ethernet port. If anyone has recommendations or even one to sell let me know. Addendum: I have purchased the laptop. I bought a Sony Vaio VGN-AR320E. diff --git a/_content/MPAAs-horror-show-Critics-not-2006-2-28.yml b/_content/MPAAs-horror-show-Critics-not-2006-2-28.yml new file mode 100644 index 0000000..24969d3 --- /dev/null +++ b/_content/MPAAs-horror-show-Critics-not-2006-2-28.yml @@ -0,0 +1,6 @@ +title: MPAA's horror show. Critics not amused +time: 2006-02-28 10:30:51 +section: Site-News +content-type: html +content: | +Hollywood Tech group recoils in horror as Analog Hole Plug is proposed. found via: TechDirt diff --git a/_content/Marzhills-new-Home-2005-4-24.yml b/_content/Marzhills-new-Home-2005-4-24.yml new file mode 100644 index 0000000..9d6b7bf --- /dev/null +++ b/_content/Marzhills-new-Home-2005-4-24.yml @@ -0,0 +1,6 @@ +title: Marzhill's new Home +time: 2005-04-24 08:46:38 +section: Site-News +content-type: html +content: | +As some of you noticed the site had some downtime. This occurred because I was moving the server and had some problems with DNS and then server hardware. (More on that in a moment.) I apologize if it caused you any trouble. 
I had been noticing a growing lack of space with my other provider, so I was looking for a different solution. It presented itself through work. My boss actually gave me a server: quad Pentium II processors, 2 gigs of RAM, and 30 gigs of RAID 5 disk space. He also offered to let me host it off his T1. All he asked was that I set up a place for the other employees to host their own websites too. This was perfect. I had a decent server with enough space to grow into and full control over the configuration. It was like having something from ServerBeach but without the cost. I look forward to using the server as a development platform to work on some experiments in Web Application development I have had in mind for some time. I hope you will enjoy my chronicling those experiences here. Unfortunately when I got the server started I was unaware that the SCSI cable had a problem. It ended up trashing the RAID container and I had to rebuild. This meant that after the DNS had replicated the site was down completely since the server went down. Luckily I was able to rebuild and we are now running Debian sarge with everything I need to be a webhosting provider to my coworkers, immediate family, and myself. diff --git a/_content/Measuring-Developer-Skill-2015-05-09.md b/_content/Measuring-Developer-Skill-2015-05-09.md new file mode 100755 index 0000000..b19e7ac --- /dev/null +++ b/_content/Measuring-Developer-Skill-2015-05-09.md @@ -0,0 +1,65 @@ +Measuring Developer Skill +========================= + +A recent article about a PyCon keynote by Jacob Kaplan-Moss brought an +interesting question to my attention: how exactly would one measure +developer skill? It's interesting because the question itself is hard +to define properly. For one thing there are a lot of dimensions to +measure skill in. And developers hold many different roles in a team. +DevOps, Architects, Maintainers, BugFixers, Automators. Each of these +requires different skillsets. 
My brother, who is also a developer, made +the comment that skill measurements are useless without an +expectation. Against what benchmark are we measuring the skill? + +I'm not at all sure there is any answer to this question that even makes +sense. But perhaps we can make some useful progress if we narrow the scope +a little. + +What if we just tackled the question of API design? + +Measuring a single dimension of coding skill. +--------------------- + +API quality is one dimension on which you could measure developer +skill. For a measurement of API quality to be useful, though, it has to +be more than some elusive abstract idea of elegance or +beauty. What would make a truly useful measurement of API quality? + +When designing an API one of the commonly held goals is the principle +of least surprise. You want to build an API that makes sense to the +user and isn't bewildering in its behavior. This seems like it +should somehow be measurable, right? But how do you measure how +"surprising" an API is? + +Another commonly held goal is to reduce complexity. An API's +purpose is to hide a lot of the complexity in a problem and make it +comprehensible to the user. + +Both of these ideas at their core are centered around the concept of ease of use. +What we need is a way of quantifying Ease of Use in a measurable form. + +# WTF/dev/day # + +Really the reason we care about ease of use is because it reduces +frustration when we consume an API. So the *number* we are trying to +reduce is the count of WTFs devs encounter when working with an +API. We could simply measure how many times a developer encounters a +WTF when using an API. But this is of course inherently biased. We +need a way to control or reduce that bias if we care about creating a +measurement that is useful to an industry. + +What factors do we need to control for in a measurement of coding skill? 
+------------------ + +Assuming we are measuring in a single dimension, we are going to have to decide +exactly what factors are important and what factors should be controlled for in +our measurements. + +People with Stockholm syndrome are obviously going to experience fewer +WTFs per day than someone new to an API. The quality of the API's +documentation also affects the WTF count. Which brings up a +question: is the documentation a part of the API? Should it be a +factor in our measurement, or should the measurement be restricted to code alone? + +We will encounter hundreds of these factors for any dimension we +attempt to measure. Collecting useful data here is a hard problem. diff --git a/_content/Measuring-Developer-Skill-2015-05-09.yaml b/_content/Measuring-Developer-Skill-2015-05-09.yaml new file mode 100755 index 0000000..dd338d9 --- /dev/null +++ b/_content/Measuring-Developer-Skill-2015-05-09.yaml @@ -0,0 +1,10 @@ +title: Measuring Developer Skill +author: Jeremy Wall +time: 2015-05-09 +timeformat: 2006-01-02 +content-type: markdown +tags: + - measurement + - skill + - developers + - hiring diff --git a/_content/MetaBase-2006-4-3.yml b/_content/MetaBase-2006-4-3.yml new file mode 100644 index 0000000..0b16a11 --- /dev/null +++ b/_content/MetaBase-2006-4-3.yml @@ -0,0 +1,6 @@ +title: MetaBase +time: 2006-04-03 21:40:04 +section: Site-News +content-type: html +content: | +Have you ever wished your CMS seamlessly handled multiple types of content and multiple ways of organizing, storing, and presenting them? Well, so have I. So now, after 2 years of working on Bricklayer off and on, I'm finally going to build something useful with it. I'm a man with many interests and I'd like to store and share them all. MetaBase will be designed with that in mind. Whether it's an image, an essay, a diary entry, a tutorial, or a howto, MetaBase will handle it and display it in an appropriate way.
It will also allow me to manage it using such exciting technologies as metadata, tagging, and hierarchical ranking. This should be fun. diff --git a/_content/MetaData-and-Database-design-2005-9-27.yml b/_content/MetaData-and-Database-design-2005-9-27.yml new file mode 100644 index 0000000..17bedb1 --- /dev/null +++ b/_content/MetaData-and-Database-design-2005-9-27.yml @@ -0,0 +1,8 @@ +title: MetaData and Database design +time: 2005-09-27 00:36:36 +tags: + - Data + - Software-Development +content-type: html +content: | +Recognizing the difference between your data and your metadata can go a long way toward keeping your data formats extensible. Theoretically, keeping your data generic and using metadata to describe it can allow for much greater flexibility in your application's design. Planning for, and accounting for, the ability to add more metadata on the fly can allow you a much greater capacity for growth in the types of data your application can handle. In fact, I'm experiencing this in a current project. The company in question is growing and is faced with a need to change their current application to allow for that growth. A massive refactoring of the application is going to be needed. They will have to be able to add new "products" (otherwise known as data) to the application and present more ways for customers to get access to said product. A metadata-based design in their data format will give them that kind of flexibility. diff --git a/_content/Mnesia-and-Schema-upgrades-2009-4-16.yml b/_content/Mnesia-and-Schema-upgrades-2009-4-16.yml new file mode 100644 index 0000000..a02cb90 --- /dev/null +++ b/_content/Mnesia-and-Schema-upgrades-2009-4-16.yml @@ -0,0 +1,13 @@ +title: Mnesia and Schema upgrades. +time: 2009-04-16 19:41:41 +tags: + - Site-News + - databases + - erlang + - mnesia + - upgrades +content-type: html +content: | +I had an epiphany of sorts the other day.
While working on iterate I realized that I was going to need to upgrade the schema of my mnesia tables. Schema changes on databases often mean bringing down the application; at least they usually have in my experience. I hate upgrading databases. That is, I did until I met Erlang and Mnesia. The issue was that my mnesia schema for some tables was using a user-provided name for the primary key. This was fine while it was just a prototype, but now it was time for it to grow up. The record describing the table had to change, and all the records in the table had to change. No one is really using this right now, so I had a choice:
  • Blow away the database and rebuild it.
  • Be all erlang'y and stuff and do it live.
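The live option leans on mnesia:transform_table/3, which rewrites every record in a table while the node keeps running. What follows is only a hypothetical sketch, not iterate's actual db_migrate_tools code: the record name, its fields, and the id scheme are all invented here, assuming a story record that was keyed on a user-supplied name and gains a generated id.

```erlang
%% Hypothetical sketch of a live mnesia schema migration.
-record(story, {id, name, points}).   % new layout: keyed on a generated id

transform_stories() ->
    F = fun
            %% old layout: {story, Name, Points}, keyed on the user-supplied name
            ({story, Name, Points}) ->
                #story{id = erlang:unique_integer(), name = Name, points = Points};
            %% rows already in the new layout pass through untouched;
            %% add one clause per historical layout as the table evolves
            (#story{} = Story) ->
                Story
        end,
    mnesia:transform_table(story, F, record_info(fields, story)).
```

Because the fun pattern-matches on the record shape, a single transform can normalize a table containing rows from several historical versions, which is the epiphany described below.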
Wait a minute, did he just say "live"? You can't update the schema and convert all the records live. That's crazy! Ahhh, but this is Erlang; we do things differently here. Witness the glory of iterate's db_migrate_tools: + +Notice the transform_stories function. It defines an anonymous fun that it uses to transform the mnesia table. Currently this function only has one signature, but there is no reason it can't have more. Since Erlang allows multiple signatures for anonymous functions, I can specify a signature for the next version of the mnesia table if/when it changes again. Here's the epiphany part: I can keep updating that fun with more signatures for each version of the table. Theoretically, if I were to end up with a table that had records of multiple different historical types in it, I would be able to use this one transform function to get them all updated to the new record type. And I can do this all live, without taking down the database or the app. I can ship each iterate version with a module capable of updating the database live from any previous version. Now that's power that's useful. Try doing that with another platform. diff --git a/_content/Mod_Perl-20---A-Real-World-Guid-2006-4-22.yml b/_content/Mod_Perl-20---A-Real-World-Guid-2006-4-22.yml new file mode 100644 index 0000000..73fafcf --- /dev/null +++ b/_content/Mod_Perl-20---A-Real-World-Guid-2006-4-22.yml @@ -0,0 +1,6 @@ +title: Mod_Perl 2.0 - A Real World Guide - part I +time: 2006-04-22 20:32:48 +section: Site-News +content-type: html +content: | +Right now there is a shortage of really easy-to-understand documentation on using mod_perl to write web applications. There are a lot of examples of using it to rewrite URIs, redirect output, run CGI apps unaltered, and even turn Apache into an email-over-HTTP protocol server. Those are all really wonderful uses, but I want to build a website. So how do you go about doing that? Especially if you aren't using CGI.pm on the backend.
This article and those that follow will focus on that topic. The tutorial is for mod_perl 2 on Apache2. It works equally well on Windows or Linux as far as I can tell. I shall assume you already have, or can find out how to get, mod_perl 2 and Apache2 on your server. Basically you will be following me as I peek into the internals of making mod_perl useful. Let's start with a simple script to take a look at the internals of what mod_perl lets you do. Here is our script:
package mod_perl_report;

use strict;
use warnings;

use Apache2::RequestRec ();
use Apache2::RequestIO ();
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;
    my $report;
    $r->content_type('text/plain');
    $report .= "server: ".$r->server()."\n\n";
    $report .= "hostname: ".$r->hostname()."\n";
    $report .= "user: ".$r->user()."\n";
    $report .= "unparsed uri: ".$r->unparsed_uri()."\n";
    $report .= "uri: ".$r->uri()."\n";
    $report .= "filename: ".$r->filename()."\n";
    $report .= "pathinfo: ".$r->path_info()."\n";
    $report .= "request time: ".$r->request_time()."\n";
    $report .= "request method: ".$r->method()."\n";
    $report .= "request string: ".$r->args()."\n\n";
    $report .= "cookies: ".$r->headers_in->{Cookie}."\n";
    $report .= "status: ".$r->status()."\n";
    $report .= "status line: ".$r->status_line()."\n";
    $report .= "notes: ".$r->notes()."\n";
    $report .= "\n\npost data: ->|".read_post($r)."|<-";
    # $report .= "\n\nmodifying variables now:\n\n";
    print "mod_perl 2.0 Debugging output:\n\n";
    print $report;
    return Apache2::Const::OK;
}

sub read_post {
    my $r = shift;
    my $buffer;
    my $data;
    while ($r->read($buffer, 1000)) {
        $data .= $buffer;
    }
    return $data;
}

1;
You will need to save it into a file called mod_perl_report.pm and then tell mod_perl where it is. To do that you will need some configuration directives in your Apache2 httpd.conf file. In my case I created a folder in my Apache server's root directory (whatever you set ServerRoot to in the conf file) called mod_perl. Then I created a file called mod_perl_prep.pl in the Apache configuration directory with my common use statements. In this file I also had the line
use lib qw(mod_perl);
so that mod_perl will know where my handler is. I call the mod_perl_prep.pl script from the apache configuration file like so:
PerlRequire conf/mod_perl_prep.pl
and finally since this is a development machine I save myself some time by adding this line directly underneath:
# comment out the following line for production use.
PerlInitHandler Apache2::Reload
That line tells mod_perl to reload any modules I use when they change, so I don't have to keep restarting Apache to see my changes. Believe me, you will want this on your development machine, because the restarting gets really old really fast. Now we are ready to tell Apache when to call our debugging report handler. At the bottom of your configuration file add the following section:
<Location /report>
    SetHandler perl-script
    PerlResponseHandler mod_perl_report
</Location>
Now Apache knows that when it sees the /report URI after the hostname it needs to call my mod_perl_report handler. So let's take a look at that handler right now and see what it does. This module illustrates all the most useful pieces of the Apache2::RequestRec object that I have so far been able to figure out. It starts out with our module's package declaration and any use statements we will need to do the work we are planning to do.
package mod_perl_report;

use strict;
use warnings;

use Apache2::RequestRec ();
use Apache2::RequestIO ();
use Apache2::Const -compile => qw(OK);
This is all the code we will need in order to use the various Apache2 mod_perl interfaces for our web app. Now, to be a full-fledged working HTTP request handler we need one last detail. All mod_perl handlers require a handler subroutine, like so:
sub handler {
    my $r = shift;
    my $report;
    $r->content_type('text/plain');

    # handler code goes here

    print "mod_perl 2.0 Debugging output:\n\n";
    print $report;
    return Apache2::Const::OK;
}
There are several pieces to this handler that are important. First, the name: it has to be called handler. If mod_perl can't find the handler sub in your module it can't use it. Second, the my $r = shift; line. This is our mod_perl Apache2::RequestRec object for the request we are handling. Lastly, the return statement. You will in almost all cases be returning one of two Apache2::Const constants: OK or DECLINED. OK tells mod_perl that your handler is accepting responsibility for this request. DECLINED tells mod_perl that your handler is declining responsibility for this request. You can import all of these constants and many more with use Apache2::Const -compile => qw(); statements. See the Apache2::Const documentation for a complete list. Now let's get down to the real nitty gritty. I wanted this script basically to be my "hello world" for mod_perl. It needed to demonstrate how to set up and use mod_perl for a real web app. To that end there were several things I needed it to do. I needed it to be called when I hit a particular URL. I needed it to output something my browser would understand. And finally I wanted it to be useful information. As I said before, there is somewhat of a lack of truly useful information online for this kind of thing. For instance, if I have a form that POSTs data to a page, how do I retrieve it? CGI.pm handles it for you, of course. But I really like to know exactly what kind of environment I'm working in. Just chalk it up to that hacker spirit coming out. After paging through a great deal of documentation on the Perl website I finally narrowed it down to the pieces that looked the most useful.
  • The Apache2::RequestRec object
    1. $r->hostname()
      The Hostname our server is being called by.
    2. $r->unparsed_uri()
      The unparsed URI of our request. This is the whole shebang including any GET strings and path information
    3. $r->uri()
      The URI our handler is handling. This should correspond to whatever was in our location section of the httpd conf file. In this case /report
    4. $r->filename()
      This is the exact path our URI maps to on the server's hard drive.
    5. $r->path_info()
      This is the portion of the URI that comes after our URI from above, with the GET string excluded if there was one.
    6. $r->request_time()
      This is the timestamp for the request
    7. $r->method()
      This is the request method. It could be any one of GET, POST, PUT, and so on; even one you made up, if you have a client that can send it and your handler takes over the request at the correct time.
    8. $r->args()
      This is the request string. Basically anything that appears after a question mark in the URI. If your handler was called at the right spot in Apache's request handling then you could potentially capture both GET and POST data sent from a client in the same request. I'll go into that more later.
    9. $r->headers_in->{Cookie} This is the string from the cookie headers. headers_in basically gives you a hash-like tied interface to the headers from the client.
    10. $r->status()
      This is the current response code for the request (eg. 200, 404, 500). You can also set the response code here.
    11. $r->status_line() ?? undetermined as yet. If you know more, leave a comment.
    12. $r->notes() ?? undetermined as yet. If you know more, leave a comment.
    13. $r->read($buffer, $len) reads from the request buffer. Used to retrieve any posted data. Its use is much like Perl's core read() function. It is used in our read_post subroutine to read any post data that might be present.
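To exercise the POST path (the read_post subroutine above), any HTML form pointed at the handler's URI will do. This is just an illustrative fragment; the field names are made up:

```html
<!-- posts a made-up field to the report handler configured above at /report -->
<form action="/report" method="POST">
  <input type="text" name="test_field" value="hello mod_perl" />
  <input type="submit" value="POST to /report" />
</form>
```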
You can hit /report on the server with your browser to see exactly what these methods actually return. Use an HTML form that posts to this page to see how that works too. That's probably enough to cover in this post. In future posts I will talk about using a CGI module like CGI_Lite in this environment and some of the neat tricks mod_perl lets you do that no other web development environment allows. diff --git a/_content/Mod_Perl-20---First-in-a-series-2006-4-22.yml b/_content/Mod_Perl-20---First-in-a-series-2006-4-22.yml new file mode 100644 index 0000000..aabad0d --- /dev/null +++ b/_content/Mod_Perl-20---First-in-a-series-2006-4-22.yml @@ -0,0 +1,9 @@ +title: Mod_Perl 2.0 - First in a series +time: 2006-04-22 20:40:00 +tags: + - APIs + - Perl + - Software-Development +content-type: html +content: | +I have put up a new page on mod_perl development on the site. I hope it is of use to someone. If it isn't, well, at least I still have my own notes :-) You can find it here: Mod_Perl 2.0 - A Real World Guide - part I diff --git a/_content/Mod_Perl-20---Second-in-a-serie-2006-5-11.yml b/_content/Mod_Perl-20---Second-in-a-serie-2006-5-11.yml new file mode 100644 index 0000000..9c34b36 --- /dev/null +++ b/_content/Mod_Perl-20---Second-in-a-serie-2006-5-11.yml @@ -0,0 +1,6 @@ +title: Mod_Perl 2.0 - Second in a series +time: 2006-05-11 11:24:43 +section: Site-News +content-type: html +content: | +My next mod_perl article is up: Mod_Perl 2.0 Writing a Useful Handler Read it while it's hot. diff --git a/_content/Mod_Perl-20---writing-a-useful-2006-5-11.yml b/_content/Mod_Perl-20---writing-a-useful-2006-5-11.yml new file mode 100644 index 0000000..2e59ae6 --- /dev/null +++ b/_content/Mod_Perl-20---writing-a-useful-2006-5-11.yml @@ -0,0 +1,6 @@ +title: Mod_Perl 2.0 - writing a useful handler +time: 2006-05-11 11:15:09 +section: Site-News +content-type: html +content: | +In the last article we looked at all the tools mod_perl gives us.
Now let's take a look at some of the things you can do with those tools. I will be demonstrating what use those tools can be in a real-world app. The first thing to do is create our skeleton handler for the app. You can use the handler we put together in the previous article for this. Just replace the contents of the handler subroutine with the code we will be generating here. I'm using my Bricklayer framework to demonstrate this, but the lessons apply equally well to most other applications. The first thing we need to do is load any libraries we might need. In my case this is just the BrickLayer module.
package mod_perl_wrapper;

use strict;
use warnings FATAL => 'all';
no warnings 'redefine';

use lib "/data/Jeremy/workspace/BrickLayer_Main";

# Apache2 mod_perl2 libraries
use Apache2::RequestRec ();
use Apache2::RequestIO ();
use Apache2::Filter ();
use Apache2::Const -compile => qw(DECLINED OK); # the Apache2 constants we need

# Bricklayer library
use BrickLayer (); # my bricklayer module

sub handler {
}

1;
There are several things to note about the code above. First is the use lib. mod_perl looks in the usual places for your libraries. You might think that just a simple . in the @INC array will make it look in the same location as your handler. However, in a mod_perl environment ./ actually refers to the Apache root directory. Therefore it's always a good idea to explicitly tell it any nonstandard library locations in your handler. Also note the mod_perl2 libraries I loaded. These are the most common libraries your app will need, so I included them here. Now let's get started building that app, shall we? Our handler has several jobs to do:
  1. examine the URL string to see what page was requested
  2. examine any request variables and modify them appropriately
  3. retrieve any post information and send it to the application, or provide a way for the application to retrieve it
  4. run the application
To this end we need to know the path_info(), the args(), the contents of read(), and be able to pass that information to our application. So let's take a look at accomplishing those tasks now. First we need to see what page was requested. We can get this information from the path_info method of our Apache2::RequestRec object. Once we have retrieved it we can parse out the needed information. In my case I want to look for *.txml files and turn those into a page request in the request string. This will allow me to hit my template files directly without anyone knowing or caring if I use a Perl script to parse them first. It makes my URLs friendlier and helps in site tracking. To that end I need to take any .txml files requested and prepend or append them to my request string. Here is an example of just that:
# $_[0] is our Apache2::RequestRec object
my $pathinfo = "";
$pathinfo .= $_[0]->path_info();
return Apache2::Const::DECLINED
    unless ($pathinfo =~ m/(\.txml$)|(\/$)/) or ($pathinfo eq "");
$pathinfo =~ s/^\///;       # strip the leading slash
$pathinfo =~ s/\//::/g;     # turn the remaining slashes into "::"
$pathinfo =~ s/\.txml$//;   # drop the extension (dot escaped and anchored)
# rewrite the request string
$_[0]->args("Page=$pathinfo&".$_[0]->args()) unless $pathinfo eq "";
First we retrieve the pathinfo string and test to make sure it contains a *.txml file at the end. We don't care for this example if there are any at the beginning, so we will consider anything else to be invalid for us to handle. If there is no .txml at the end then we decline to handle this request and let Apache take over, by returning the Apache2::Const::DECLINED constant. If there was a *.txml file requested, though, then we want to capture that and modify it for use in the Bricklayer app. I use a series of substitution expressions to rewrite the path string, stripping off the beginning slash and changing all other slashes to "::" so that Bricklayer can understand the template request. Once I have rewritten the string I prepend it to the argument list as a Page argument using the args() method. This method both retrieves and lets us set the argument string. One nifty thing about mod_perl environments is that you can freely choose to ignore whether you are processing a POST or GET request. This means you can post a form to a request string and still retrieve the request string variables, something that would have been difficult to do in a traditional CGI environment. It also means that when I translate this path request I don't have to worry about whether this was a GET or POST request, since my CGI wrapper can process both simultaneously. That's really all my handler needs to do; now it just calls the Bricklayer app and runs it with the new request string. We aren't finished yet though. The next article will cover how I created a CGI wrapper and rewrote CGI_Lite.pm to handle mod_perl requests. diff --git a/_content/Mod_Perl-20-2006-4-22.yml b/_content/Mod_Perl-20-2006-4-22.yml new file mode 100644 index 0000000..a7bc663 --- /dev/null +++ b/_content/Mod_Perl-20-2006-4-22.yml @@ -0,0 +1,6 @@ +title: Mod_Perl 2.0 +time: 2006-04-22 20:33:50 +section: Site-News +content-type: html +content: | +Here is my collection of Mod_Perl 2.0 tutorials and notes.
diff --git a/_content/Moose::Role-Testing-2007-9-22.yml b/_content/Moose::Role-Testing-2007-9-22.yml new file mode 100644 index 0000000..dd6868d --- /dev/null +++ b/_content/Moose::Role-Testing-2007-9-22.yml @@ -0,0 +1,6 @@ +title: "Moose::Role Testing" +time: 2007-09-22 05:28:23 +section: Site-News +content-type: html +content: | +Currently there is no simple way to test Moose::Roles. Since they defer things like attribute adding and method wrapping, you have to create a dummy class that uses them to test what they do. Usually this is done by creating a package inline in the test module that does what you need or don't need based on what you're testing. Therefore I'm considering either adding Role support to Test::Moose or creating a Test::Moose::MockObject module to make this easier. Still trying to decide which way to go. Maybe I'll go both ways :-) diff --git a/_content/More-Beryl-XGL-goodness:-Tabbing-2007-2-13.yml b/_content/More-Beryl-XGL-goodness:-Tabbing-2007-2-13.yml new file mode 100644 index 0000000..94ae985 --- /dev/null +++ b/_content/More-Beryl-XGL-goodness:-Tabbing-2007-2-13.yml @@ -0,0 +1,6 @@ +title: More Beryl/XGL goodness Tabbing +time: 2007-02-13 12:48:52 +section: Site-News +content-type: html +content: | +Beryl has a feature called Grouping and Tabbing. It's a way to organize windows on your desktop to save space. And as promised, I have screenshots demonstrating the feature. This first one shows the tabbed group with the tab selection popup. BerylTabbing You can choose your animation when switching tabs just like anything else in Beryl. Here is a screenshot of the rotation animation when switching tabs. Switching Tabs in a Tabbed Group Stay tuned for more screenshots as I identify features that I think are worth highlighting. Next will be some transparency examples and window thumbnailing.
diff --git a/_content/Moving-2006-6-3.yml b/_content/Moving-2006-6-3.yml new file mode 100644 index 0000000..68a6672 --- /dev/null +++ b/_content/Moving-2006-6-3.yml @@ -0,0 +1,6 @@ +title: Moving +time: 2006-06-03 18:57:21 +section: Uncategorized +content-type: html +content: | +Moving makes me a little on edge like a cat. I feel like I'm flying around like a fledgling lost in a sea of boxes. I'll go batty if we don't clear out our dining room soon. (That's where all the boxes are kept.) Confused? Ack, my blog has been hijacked! HA HA!!! Don't worry, folks, it's just my wife playing a joke on me. :-) diff --git a/_content/New-Article:-Non-Discriminatory-2005-10-7.yml b/_content/New-Article:-Non-Discriminatory-2005-10-7.yml new file mode 100644 index 0000000..cd6e9ca --- /dev/null +++ b/_content/New-Article:-Non-Discriminatory-2005-10-7.yml @@ -0,0 +1,9 @@ +title: Non-Discriminatory Databases +time: 2005-10-07 00:35:57 +tags: + - Site-News + - Data + - Software-Development +content-type: html +content: | +Non-Discriminatory Databases An article about MetaData and Database Design. Mostly just a look inside my recent thoughts on database design and on-the-fly expandability. diff --git a/_content/New-Link-Spam?-2005-9-8.yml b/_content/New-Link-Spam?-2005-9-8.yml new file mode 100644 index 0000000..51b88b5 --- /dev/null +++ b/_content/New-Link-Spam?-2005-9-8.yml @@ -0,0 +1,6 @@ +title: New Link Spam? +time: 2005-09-08 20:32:42 +section: Site-News +content-type: html +content: | +Has anyone noticed spurious referrers showing up in your website logs? I have. It looks like they were using bogus referrals to get their link onto my log page to bump their Google rating. I've now removed my logs link from the sidebar and added /logs/ to the robots.txt file so it won't benefit them anymore. But it was still annoying. Has anyone else noticed this kind of thing before?
diff --git a/_content/Non-Discriminatory-Databases-2005-9-28.yml b/_content/Non-Discriminatory-Databases-2005-9-28.yml new file mode 100644 index 0000000..339e72a --- /dev/null +++ b/_content/Non-Discriminatory-Databases-2005-9-28.yml @@ -0,0 +1,6 @@ +title: Non-Discriminatory Databases +time: 2005-09-28 19:56:44 +section: Uncategorized +content-type: html +content: | +Most databases these days are RDBMS types. An RDBMS is all about identifying linkages between pieces of data, usually using a unique identifier known as a primary key. They can often get to be really big, with lots of interconnections in the various tables. They also have one important characteristic: they tend to force you to make assumptions about the data you are retrieving. A table expects certain kinds of data in certain kinds of formats. If you need to add a kind of data you have to either create a new table with new linkages or modify the table in question. Either one means more work for you. If your database is too discriminatory it might even be impossible without massive rewriting of your code. This is great if you want to limit and enforce data relationships. But what if you want freeform relationships? What if you want to be able to modify and change the way your data relates on the fly? What if your app needs non-discriminatory data types? An RDBMS may not be the way to go. A sufficiently complex database with lots of data relationships can be impossible to modify at times. You might be faced with a choice: add a new set of tables and relationships (thus contributing to the growing complexity and the problem), or create a whole new database and export, then import, your data into the new database with any translating you might need along the way. Metadata to the rescue!!! All pieces of information have metadata associated with them. Simply put, metadata describes a piece of information.
A checking account's balance might have the following associated metadata: it's an integer, it's currency, it's rounded off to two decimal places when displayed, it refers to account 55341 at the bank of somewheresville, you wish it were a higher number. All of this metadata puts the number into context for you. In fact, that is another way of describing metadata: the context your information inhabits. For some applications, information context is the name of the game. Those applications can benefit from a metadata-based information design. Storing your music collection, for instance, is one such application. You choose what music to listen to based on your mood. Maybe you want something soothing, so you choose soothing classical music. Soothing and classical are metadata about the song title. If your music database allows for on-the-fly metadata about the song titles, then classifying and finding your music is potentially much easier. At its most basic, a metadata-based database design is extremely simple in structure. All that is required is two "tables". One "table" only needs two columns: an identifier and a piece of information. The other one needs three columns: a metadata type, an information link, and the metadata content. MetaData Design Graph The real power of this is what it allows you to do in your code. Adding metadata is dead simple. You can organize and sort your data in any fashion you want to. You can update, change, or expand your data description on the fly with changes to the database. A whole world of possibilities begins to expand before you. Not every application can benefit from that kind of flexibility though. Certain applications need the strict control over relationships that a traditional RDBMS design gives. Accounting applications, for instance, rely on strictly defined relationships. But if your application can benefit from this, then it's a huge boon to your development and design to use.
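The two-"table" layout described above can be sketched in a few lines of SQL. This sketch uses SQLite through Python purely for illustration; the table and column names here are invented, not from the post:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE item (id INTEGER PRIMARY KEY, content TEXT);
    CREATE TABLE metadata (
        type    TEXT,                         -- what kind of fact this row states
        item_id INTEGER REFERENCES item(id),  -- the information link
        value   TEXT                          -- the metadata content
    );
""")

# Store a bare piece of data, then describe it with arbitrary metadata.
item_id = conn.execute("INSERT INTO item (content) VALUES (?)",
                       ("Clair de Lune",)).lastrowid
conn.executemany("INSERT INTO metadata VALUES (?, ?, ?)",
                 [("genre", item_id, "classical"),
                  ("mood", item_id, "soothing")])

# Find everything soothing; note that adding the "mood" type
# required no schema change at all.
rows = conn.execute(
    "SELECT i.content FROM item i "
    "JOIN metadata m ON m.item_id = i.id "
    "WHERE m.type = 'mood' AND m.value = 'soothing'").fetchall()
print(rows)  # [('Clair de Lune',)]
```

New kinds of description are just new rows in the metadata table, which is exactly the on-the-fly flexibility the post is after.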
diff --git a/_content/OK-so-I-was-a-little-over-the-to-2006-1-2.yml b/_content/OK-so-I-was-a-little-over-the-to-2006-1-2.yml new file mode 100644 index 0000000..5e1f298 --- /dev/null +++ b/_content/OK-so-I-was-a-little-over-the-to-2006-1-2.yml @@ -0,0 +1,7 @@ +title: OK so I was a little over the top with that last one +time: 2006-01-02 11:15:32 +tags: + - Site-News +content-type: html +content: | +You'll have to forgive me. IIS and its quirks just caught me at a bad time. I guess I should retract my call for murder. It's not Microsoft's fault they designed it badly. Well, OK, maybe it is, but that's no reason to kill the programmer.... Is it?? diff --git a/_content/OSS-Roundup-Series---I-trust-Ope-2005-10-26.yml b/_content/OSS-Roundup-Series---I-trust-Ope-2005-10-26.yml new file mode 100644 index 0000000..1b6a60d --- /dev/null +++ b/_content/OSS-Roundup-Series---I-trust-Ope-2005-10-26.yml @@ -0,0 +1,9 @@ +title: OSS Roundup Series - I trust Open Source +time: 2005-10-26 00:14:19 +tags: + - Site-News + - Open-Source + - OSS-Apps +content-type: html +content: | +I'm planning to write a series of articles on Open Source Tools and Applications that I feel are ready for prime-time desktop usage. I'll write one article for each app and list their relative strengths and weaknesses. If you've read my stuff for any period of time, you'll know I tend to use a lot of Open Source Software. This is for several reasons:
  1. I can't afford to buy commercial stuff
  2. Open Source is one way I am a good steward
  3. I trust Open Source Software
I can't afford to buy commercial. I mean, really. I have 5 kids, I'm technically below the poverty level, and I need a decent working computer with software to do my job. How am I supposed to shell out 300+ dollars for an operating system, 399 dollars for a low-end office suite, and 109 dollars for the lowest-end development environment, and we haven't even touched the ancillary stuff I'd need? Furthermore, I'm supposed to shell that out every couple of years? I'm sorry, that just won't cut it. If I had to do that I'd have to get out of this business altogether. Open Source is one way I am a good steward This relates to the above. I'm a Christian, so I believe God holds me accountable for how I spend my money. Hardware leaves me little choice. I don't buy top of the line, but I do have to buy it. That comes out to a couple hundred here, a couple hundred there. Software is the only place I have an opportunity to cut spending. And I only have that opportunity because of Open Source. Where else can I get an entire operating system, plus a development environment, plus an office suite, plus a whole host of ancillary apps to help me do my job, for the low low price of 80 dollars in a boxed set, or 0 if I have a broadband internet connection? Certainly not in closed source software. I trust Open Source Software We've all seen some of the fear being spread around about open source software. It's volunteer driven, so it must be lower quality. It's a free-for-all; anyone might put a security hole in there. You don't have any protection if stuff goes wrong. Who do you turn to when you need help? Let me just say: I've dealt with my share of commercial software companies. I've had to wait on the line for tech support. I've used the commercial offerings. They all had bugs. They all had people with no clue taking tech support calls. In short, it's no better on the commercial side than it is on the open source side. It's a level playing field.
At least, if you're using open source, someone like me doesn't shudder when you ask if we can help fix your computer. We might even enjoy doing it. More and more software companies are putting out software that phones home. Right now it's optional. Soon it won't be. I have no control over what information is collected about me. "So what?" you might say. "I don't have anything to hide." Well, just think about the amount of spam you get. Now multiply that by a large number and imagine your spam filters trying to cope. Outlook Express? It's toast, baby. It's not just about having something to hide. Can you trust that company to keep the information safe? Maybe the company has no nefarious plans for it, but what about the employees? Or that clever hacker who just found a way in? Can that company keep your information safe from prying eyes? In my line of work I read every day about some major company that has somehow leaked personal information about its customers. Lives were destroyed; credit ratings went down the tubes. I'm sorry, but the fewer people who have copies of my info the better. I can be sure that mainstream Open Source software doesn't phone home, because if it did, someone out there would have blown the whistle. I would have blown the whistle. On the whole us OSS types are pretty sharp about that stuff. So in short, I trust Open Source. diff --git a/_content/Oh-my-goodness-he-just-updated-h-2009-4-5.yml b/_content/Oh-my-goodness-he-just-updated-h-2009-4-5.yml new file mode 100644 index 0000000..47de576 --- /dev/null +++ b/_content/Oh-my-goodness-he-just-updated-h-2009-4-5.yml @@ -0,0 +1,15 @@ +title: Oh my goodness he just updated his blog!! +time: 2009-04-05 20:29:18 +tags: + - Site-News + - javascript + - etap + - google + - javascript + - joose + - life + - updates + - whirlwind +content-type: html +content: | +Hi there! Honestly I'm not dead, I've just been all wrapped up in this whole life thing. You know... that whole, "Holy cow!!!! I work at Google now!! 
When did this happen exactly?" thing, where you're incredibly busy just trying to get up to speed on it all and catch your breath? Well anyway, I feel bad since I've been silent for so long, so here goes. An update from the trenches of the Life of Jeremy Wall. Since I wrote last I have
  • been "acquired by Google"
  • had my first "truly successful" Open Source Project
  • and actually got my Student Loan back under control.
I'm actually absurdly proud of that last one... and still a bit bewildered by the first one. So let's go down the list one at a time, shall we?

"The Acquisition"

The company I worked for, DoubleClick Performics, got bought by Google. Who would have thought it? Somehow I landed an actual job at Google doing what I love: crafting code. I have to say, I think this has to have been a "God thing". I can't see any other way to explain it. Google of course is an awesome place to work. Free food, smart people, a gameroom (strangely I almost never make it in there though), snacks, and really interesting technology to play with. I'm learning a lot about working in highly available, highly scalable environments. I keep waiting to wake up and find out it was all a dream.

"The Open Source Project"

My last post was about etap, my learn-erlang project. I had no idea that the project would get the attention of Nick, a coder with EA, who was looking for a TAP compliant testing framework for their erlang code. He contacted me and asked if he could take over management of the code. I said "sure", as long as I still got to contribute when I had time. Before I knew it EA was using etap internally and I had what I consider to be my first truly successful open source project. etap has now been used commercially, traveled to conferences, and is soon to be featured in a book. Not bad for a learner's project, huh?

"The School Loan"

And perhaps most awesome of all, I'm making headway on the whole credit repair thing. The school loan is back under control and no longer has a devastating impact on my credit report. This is an accomplishment that makes me really wonder if I'm in some kind of awesome dream or something. Life is really looking up around here. I'll try to get more regular on my posting again as I play with more erlang, think about writing a book, and, oh, I almost forgot to mention "Joose", the meta-object protocol for javascript. I'll write more about that later. diff --git a/_content/Old-Article-Back-up-2005-5-4.yml b/_content/Old-Article-Back-up-2005-5-4.yml new file mode 100644 index 0000000..fe3cf17 --- /dev/null +++ b/_content/Old-Article-Back-up-2005-5-4.yml @@ -0,0 +1,11 @@ +title: Old Article Back up +time: 2005-05-04 01:30:17 +tags: + - Site-News + - Data + - Perl + - Software-Development + - XML +content-type: html +content: | +I am slowly getting some of my old articles back up and online. You will start to see them in the links under pages on the sidebar. As I determine they are useful or helpful I will put them back up. The first is Perl and CGI part I. I really should do part II of that one I suppose :-) diff --git a/_content/On-Inertia-2014-05-21.yml b/_content/On-Inertia-2014-05-21.yml new file mode 100755 index 0000000..5fe2b32 --- /dev/null +++ b/_content/On-Inertia-2014-05-21.yml @@ -0,0 +1,17 @@ +title: On Inertia +author: Jeremy Wall +time: 2014-05-21 +timeformat: 2006-01-02 +content-type: markdown +tags: + - site-news + - job-change +content: | +This is my last week at Google. +=============================== + +Friday will be my last day. Which feels very strange, I must admit. I'm a high inertia person. I tend to stay the course once I've picked one, which is probably why I've been at Google for 6 years now. But now that is all changing, so this is as good a time as any to look back. + +Google is without a doubt the best company I've worked for to date. 
It's a deeply engineering-centric company, which has many benefits. They invest heavily in engineering tooling and engineering culture. Everything from build tools to source control has had significant resources poured into them at Google and it shows. + +But by far the best feature of Google as a company has nothing to do with the engineering culture. Google's best feature is its conscience. Without a doubt Google is the most moral company I have ever worked for. It's the kind of thing that is hard to verify from the outside, but from the inside it's obvious that Google has a significant conscience in its employees, and it listens to that conscience and changes course as a result. diff --git a/_content/On-the-fly-Perl-Classes-with-Typ-2007-1-25.yml b/_content/On-the-fly-Perl-Classes-with-Typ-2007-1-25.yml new file mode 100644 index 0000000..d53acaa --- /dev/null +++ b/_content/On-the-fly-Perl-Classes-with-Typ-2007-1-25.yml @@ -0,0 +1,86 @@ +title: On the fly Perl Classes with Type restricted attributes +time: 2007-01-25 19:34:17 +tags: + - Site-News + - Perl + - Software-Development +content-type: html +content: | +There is a CPAN module, Class::Struct, that can give you this same functionality. But fool that I am, I like to do things the hard way. This implementation, while it may not do things as automatically as Class::Struct does, will allow you to create simple type restricted attributes on the fly in your code with a simple one line class method. You could even bundle this in with an AUTOLOAD function to build the attributes as you need them. Also, the class attributes are added at runtime, and with a little extra work you can even specify such things as typed arrays or hashes. OK, enough pros and cons, let's take a look at the code. First we take a look at our base class that does most of the work. 
+package Class::Builder;
+
+sub new {
+    my $class = ref($_[0]) || $_[0];
+    my $self = {};
+    return bless($self, $class);
+}
+
+sub attribute {
+    my $self = $_[0];
+    my $type = $_[1];
+    my $attribute = $_[2];
+    my $value = $_[3];
+    if ($value) {
+        #handle INT case
+        if ($type eq "INT") {
+            if ($value =~ /^[0-9]+$/) {
+                $_[0]->{$attribute} = $_[3];
+                return $_[0]->{$attribute};
+            } else {
+                $_[0]->err("Not a $type value for $attribute");
+                return undef;
+            }
+        } elsif ($type eq "SCALAR") {
+            #handle simple SCALAR case
+            $_[0]->{$attribute} = $_[3];
+            return $_[0]->{$attribute};
+        } elsif (ref($value) eq $type) {
+            #handle other types
+            $_[0]->{$attribute} = $_[3];
+            return $_[0]->{$attribute};
+        } else {
+            $_[0]->err("Not a $type value for $attribute");
+            return undef;
+        }
+    } else {
+        $_[0]->err("No value passed for $attribute in ".ref($_[0]));
+    }
+    return $_[0]->{$attribute};
+}
+
+sub err {
+    $_[0]->{err} = $_[1] if $_[1];
+    return $_[0]->{err};
+}
+
+return 1;
+
+Now let's see how we can use it.
+
+package Document;
+use Class::Builder;
+use Document::Section;
+use base qw(Class::Builder);
+
+#Attribute Methods
+# example of a SCALAR typed attribute implementation
+sub Name {
+    return $_[0]->attribute('SCALAR', 'Name', $_[1]);
+}
+
+#Example of an ARRAY typed attribute with a further simple check
+#that the array elements are of type: Document::Section
+sub Sections {
+    my $arraytype = 'Document::Section';
+    # read the old value directly; calling $_[0]->Sections() here would recurse forever
+    my $sections_old = $_[0]->{Sections};
+    my $sections = $_[0]->attribute('ARRAY', 'Sections', $_[1]);
+    foreach (@$sections) {
+        if (ref($_) ne $arraytype) {
+            #throw an error here
+            $_[0]->err("Invalid Array Element $arraytype");
+            $_[0]->attribute('ARRAY', 'Sections', $sections_old);
+            # reset the Sections array
+            return undef;
+        }
+    }
+    # die "sections is a ".ref($sections);
+    return $sections;
+}
+
+# example of a HASH typed attribute
+sub Meta {
+    return $_[0]->attribute('HASH', 'Meta', $_[1]);
+}
+
+# example of an INT typed attribute
+sub Cursor {
+    return $_[0]->attribute('INT', 'Cursor', $_[1]);
+}
+
+I'm not finished modifying this concept so I may post some additional enhancements later. But you can get the idea now. diff --git a/_content/Open-Letter-to-Sony-Music-2005-11-1.yml b/_content/Open-Letter-to-Sony-Music-2005-11-1.yml new file mode 100644 index 0000000..9d9af8b --- /dev/null +++ b/_content/Open-Letter-to-Sony-Music-2005-11-1.yml @@ -0,0 +1,7 @@ +title: Open Letter to Sony Music +time: 2005-11-01 22:35:41 +tags: + - Site-News +content-type: html +content: | +I recently wrote to Sony Music regarding their controversial DRM technology. Below is the text of the message. Some of you may have heard of the "rootkit" controversy surrounding Sony's DRM protected music CDs. I recently wrote about how I trust Open Source technology. This is one more example of why. You don't always know what a Closed Source company is doing with their software, and the consequences can be disastrous. Here is the text of my message to Sony Music:
To Whom It May Concern, I have recently been made aware of some disturbing facts regarding your DRM technology for music. While I appreciate your desire to protect your investment in music labels and artists, I must strongly disagree with your decision to install "rootkit" based technology, without users' knowledge, on computers that have these CDs inserted into them. I work for a company that manages networks totalling over 7,000 computers. I am now forced to advise our customers that putting Sony music CDs into machines on their networks is strongly discouraged as a matter of policy. I can't take the chance that the number of security holes your DRM software introduces will help to take down one of our networks. I regret to inform you of this, but I hope that you will give it due consideration and consider altering your policy and paying closer attention to the ramifications of the DRM technology you employ. Jeremy Wall Quality Network Solutions jwall@qnsk12.com
Addendum: For those of you who don't know, a rootkit is an application that embeds itself into your operating system at a very deep level and allows its creator to control your computer without your knowledge. It is often employed by hackers to remain undetected once they are on your system. Sony's use of the technology is highly irresponsible. diff --git a/_content/OpenOfficeorg-||-A-real-competi-2005-11-7.yml b/_content/OpenOfficeorg-||-A-real-competi-2005-11-7.yml new file mode 100644 index 0000000..8d4927f --- /dev/null +++ b/_content/OpenOfficeorg-||-A-real-competi-2005-11-7.yml @@ -0,0 +1,9 @@ +title: OpenOffice.org || A real competitor to Office? +time: 2005-11-07 06:59:30 +tags: + - Open-Source + - OSS-Apps + - Reviews +content-type: html +content: | +OpenOffice.org Is it a real competitor to Office? Well now, that depends on how you define competitor. Lots of people define competitors to MS Office by their level of interoperability with Office. This has the effect of ruling out pretty much every non-Microsoft product out there. I'm going to take a slightly different look at the picture. The real question here is: can OOo (i.e. OpenOffice.org) do everything you need your office suite to do? This is a real world look at what you really need and whether OOo does it and does it well. So let's get on with the review, shall we? The list of things your office suite needs to do can be summed up fairly easily. There is the basic functionality that the average home user or small business requires. Then there are the advanced features that the power users and task automaters want. And finally there are the enterprise class features that large organizations want. Here is my semi-detailed list.
  • Basic Functionality
    • Write and Format Text Documents
    • Create spreadsheets to track and analyze numbers
    • Create Presentations
    • Create and embed Diagrams and Illustrations
  • Advanced Features
    • Connect to Databases and use their data
    • Automate Standard and repeatable tasks
    • Creation and Use of Templates for common document look and feel
    • Extend the functionality of the app with scripting and/or plugins
  • Enterprise Class Features
    • Share and Publish Data
    • Enforce Document Standards
I hope this will be a useful review that helps you determine if you can use OOo on your desktop at home or at work. OpenOffice.org Writer When you think about it, word processing hasn't really had or needed much innovation since the first WYSIWYG editor came out. WordPerfect 5.1 pretty much had all the important formatting features sewn up, along with a lot of the power user features as well. In fact Word 97 was pretty much the top of the word processing evolutionary chain. As for Writer, it has all the elements needed to edit and create documents. You can format them just as easily as in Word. You can integrate data from other sources. You can create complex documents and layouts. So do Word, WordPerfect, and just about every other word processing app out there too, though. How does a word processing program stand out in such a market? While Writer may not have anything earth shattering to offer, it does have some pretty nice features to make the task of editing easier, particularly when working on large complex documents. All the standard word processing features are there, including spell checking, multiple fonts and sizes, positioning, lists, and tables. However, Writer does offer something not present in most of the other word processing applications: the Stylist and the Navigator. No doubt inspired by the XML underpinnings of the OpenDocument format, these two features help Writer stand out from the crowd. The Stylist makes using and keeping track of styles easier than ever. It harnesses the power and flexibility of CSS and brings it to the word processor. The concept is simple. You can create a library of styles for your document and edit them from a central location. All your headers, all your lists, even your paragraphs can be modified at once. If you change your text-body style in the Stylist then every place in your document that uses the text-body style will change at the same time. You can edit them and keep track of them, all from one location. 
You can even use them again in later documents. This time saving feature comes in very handy, and after a while you will wonder how you ever got along without it. The Navigator makes finding that paragraph you need to edit a little easier. It shows you the structure of your document. Need to find that section on grandma's pumpkin pie recipe? Find the subheading in the Navigator. Need to find that section on the employee dress code in the company handbook? Look in the Navigator. Where is that chart of the company's quarterly earnings in your report? Yep, you guessed it, look in the Navigator. In fact, Writer has everything you need to write quality documents for home or business. As far as sharing those documents with others, Writer has a number of options for you. You can export to PDF if preserving the formatting perfectly is your first concern and modifying it is of no concern. If you need them to be able to edit the documents, that won't work quite so well though. For that purpose you can export and save the documents to most other common formats. Or you can choose door number three: offer them OpenOffice to edit and save the documents themselves. It's free and you have every right to distribute it yourself. Burn them a CD and they can modify your document all they want. Trust me, not using MS Office won't be a deal killer. Not if you provide a quality product and quality service. And for the home user, there is absolutely no reason Writer won't work for you. OpenOffice.org Calc Calc is the spreadsheet element in the OOo suite. It gives you some serious number crunching power. All the standard functions to sum, analyze, and otherwise manipulate your numerical data are here. It even has everything you need to organize and display that data in all the standard and not so standard charts. Pie charts, bar charts, line charts, and even snazzy 3D charts are all here. 
The standard stuff all works exactly like spreadsheets have worked since Lotus 1-2-3 was on your local accountant's computer. So just how does Calc stand out? OOo is all about sharing information. It's open source, so sharing information is no threat to its business model. The DataSources panel puts it all at your fingertips. And the DataPilot wizard walks you through it. Chances are, no matter what database your data is stored in (or you intend to store it in), OOo has a way to connect to it out of the box. Oracle, MySQL, SQL Server, Access - OOo has inbuilt connectors for all of them ready to go. Few other spreadsheet applications have this degree of connectivity at this price tag. And it works out of the box. Now, no discussion of spreadsheet functionality is complete without including the subject of macros. I've known people who turned macros into an artform. People who used them to automate so much of their job that they could do the work of 10 people without breaking a sweat. And Calc rises to the challenge. There is nothing Open Source programmers like better than being able to modify the software they use. And OpenOffice Basic puts that power at your fingertips. Whether you just like to use powerful spreadsheet functions or build full blown apps, OpenOffice has the macro tools you need to get the job done. If you're willing to put in the time to learn, you can do anything with them. You won't find you are missing any functionality in this area. Documentation on the other hand can be a little difficult to understand. I and others plan to help with this, so I'm sure it won't be long before that problem is remedied. Will Calc fit your needs in a spreadsheet application? The answer is yes. It has all the functionality you require to process and analyze your data. As for sharing that data? You can of course export Calc documents to PDF should you care to, and save them to the standard formats. 
A better question to ask though is: do you really want to share your data in spreadsheet form? Databases and reports are a much better solution than a spreadsheet. Very few people indeed need to share editable spreadsheets outside of their company or group. And if you do, you can always choose door number three again. Nothing stops you from making OOo available to them for their own use. Again, not having Excel is probably not going to be a deal breaker in your dealings with other people and companies. That quality product and service aspect is of much more importance. OpenOffice.org Base A database application was the one thing missing from previous releases of OpenOffice. OOo always had database integration from the ground up with the data-sources panel. And you could use that to generate reports and analyze database data. But no native database interface application was provided. Competing with the hugely popular Access in MS Office, OpenOffice came up looking a bit weak. Enter Base, the OpenOffice database application. Base is much more than just a native database application. It is also a fully functional portal to any database you want to use. You can generate reports, create forms, and edit, modify, or delete the information in any database you have. Base doesn't care what database you use; it just gives you a friendly interface to the data. As far as what kind of functionality that interface provides you, it's pretty much all there. The best part of OOo's database integration is how pervasive it is. It's perfectly possible to do a database report in Writer with all the formatting power that gives you. You can process the data in Calc first, also with all the power that gives you. Base gives you everything you really need when dealing with your databases and the data in them. I don't think anyone will find they are missing functionality here. OpenOffice.org Impress Presentation software: the app with a niche audience that everyone thinks they need. 
Personally I think presentation software is overused and misused perhaps 95% of the time. But for that other 5% it's a very handy tool. If you're one of those people who use presentation software and actually need it, then Impress has all the tools you might need. It even has some features you might find very handy. That Stylist I was talking about makes an appearance here as well. It's just as handy in Impress as it is in Writer. Impress has all the animations, transitions, effects, and formatting options you could possibly need. The same data integration abilities available elsewhere are here too. You have drawing tools, and you can embed charts, tables, and other elements in your presentations. Presentation software at its core is like word processing software. It just has to do a few things and there really isn't much innovating left to do. Impress fits your needs just fine in that regard. OpenOffice.org The Suite As a whole the OpenOffice suite has every feature you might need in an office suite. Whether you need enterprise integration, or just a simple office suite for home use, or somewhere in between, OpenOffice can scale to your needs. And you certainly can't beat the price. diff --git a/_content/Pages-2009-4-5.yml b/_content/Pages-2009-4-5.yml new file mode 100644 index 0000000..64c4710 --- /dev/null +++ b/_content/Pages-2009-4-5.yml @@ -0,0 +1,7 @@ +title: Pages +time: 2009-04-05 19:32:16 +tags: + - Site-News +content-type: html +content: | +foo diff --git a/_content/Perl-Tip---Chained-encodings-and-2006-10-7.yml b/_content/Perl-Tip---Chained-encodings-and-2006-10-7.yml new file mode 100644 index 0000000..fb96476 --- /dev/null +++ b/_content/Perl-Tip---Chained-encodings-and-2006-10-7.yml @@ -0,0 +1,8 @@ +title: Perl Tip - Chained encodings and binmode magic +time: 2006-10-07 14:42:04 +tags: + - Perl + - Software-Development +content-type: html +content: | +OK, how many of you have gotten those Wide Character in Print warnings while dealing with unicode text? 
Especially with UTF-16 files, which don't get handled on the fly in perl. I finally figured out how to get rid of them thanks to, of all places, an MSDN blog. The gist of the post is a technique where you chain encodings together when changing the encoding used on a file handle. He used it on an open, but for my purposes I wanted to change STDOUT's encoding, not an opened file handle. So here's the magic line: binmode(STDOUT, ":raw:encoding(UTF-16):utf8"); Now your first thought is, why couldn't you just use the encoding you want? Well here's why. First of all, the utf8 encoding on the rightmost tells perl that it is receiving its default utf8 encoding. Then the encoding(UTF-16) in the middle performs the encoding conversion, and finally the raw on the left tells perl to spit it out without changing it. The three together result in a warningless conversion from utf8 to UTF-16 with no line feed conversion. I didn't even know you could chain these together until now, but I'm going to remember this trick for the future, that's for sure. To break it all down for you: the chain is processed from right to left. Starting with utf8 got rid of my wide character warning. Chaining that into the encoding(UTF-16) performed my conversion, and chaining that into :raw made sure I got text and not octet encoded characters. diff --git a/_content/Perl-and-CGI-part-I-2005-5-4.yml b/_content/Perl-and-CGI-part-I-2005-5-4.yml new file mode 100644 index 0000000..1306815 --- /dev/null +++ b/_content/Perl-and-CGI-part-I-2005-5-4.yml @@ -0,0 +1,7 @@ +title: Perl and CGI part I +time: 2005-05-04 00:47:13 +tags: + - Uncategorized +content-type: html +content: | +Perl and CGI part I Cookies, Query Strings, and Post variables.... OH My!!!! If you're just starting out in perl and trying to figure out how to handle all that cgi stuff you have a number of options. You can use a CPAN module (CGI, CGI-Lite, FCGI...) or you can use a do it yourself solution. 
The CPAN modules have the benefit of handling everything for you with easy (supposedly) to use methods. On the other hand, if you don't need all the features of a CPAN module, a do it yourself solution may be less confusing and have a smaller codebase. In this article I will walk you through making a custom module for your CGI to handle the basics of a CGI application. This article assumes familiarity with basic Perl syntax and modules. All those Cookies, Query Strings and Post variables are made possible in the HTTP world by something called HTTP headers. The HTTP header tells the application (browser, server, RSS reader...) making the page request useful information for displaying the page. Some of the header is built automatically by the HTTP server (apache, IIS...) or application making a request. However, if you want to include cookies, or retrieve query string, post variable and cookie values on the server side, you have to be able to retrieve the values from those headers. Now in my case I needed to be able to set and read cookies and retrieve query strings and post variables. I didn't need to be able to write out html automatically or output anything else other than that header. Rather than try to figure out how to use the CPAN modules for just those tasks and nothing else, I decided to stretch myself a little and write my own. I like learning new things, and it turned out not to be any more difficult than wading through the documentation for the CPAN modules would have been. The first thing to remember when working on a CGI module is the last line of the HTTP header. The last line, you say? Why do we start on the last line? The reason is that any header, regardless of whether it has anything else in it, must have this last line. Additionally, this line must be last because, when the browser sees it, it stops processing the header and assumes everything after it is part of the page itself. 
What is this magical line? "Content-type: text/html\n\n" This line tells the browser application what mime type the page is. In this case it is html text. I could just as easily have set it to text/xml, text/rss, or any other mime type I cared to, including my own custom mime types. For more information about mime types you can look here. For our custom module we are going to use an object to represent the header. What we want is to be able to add cookies to the header. We also want to leave it open to possibly adding other things to the header later. When our header is created we will retrieve the string by using a method. The first thing our header object needs is a constructor method, which we will call new_header, in the interest of readable code.
package pk_cgi;
require Exporter;
use strict;
our @ISA = qw(Exporter);
our @EXPORT = qw(cgi_request new_header get_cookie_list get_cookie);

sub new_header {
    my $proto = shift;
    my $class = ref($proto) || $proto;
    my $header = "Content-type: text/html\n\n";
    return bless(\$header, $class);
}
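As a quick sanity check, here is a small self-contained sketch (my own example, not from the original article) that inlines the constructor and prints the header it builds:

```perl
use strict;
use warnings;

# Minimal sketch of the pk_cgi constructor described in the article:
# the header object is just a blessed reference to a string whose
# last line is the mandatory Content-type line.
package pk_cgi;

sub new_header {
    my $proto = shift;
    my $class = ref($proto) || $proto;
    my $header = "Content-type: text/html\n\n";
    return bless(\$header, $class);
}

package main;

my $header = pk_cgi->new_header();
print $$header;   # already a complete, minimal, valid CGI header
```

Because the object is nothing but a string reference, dereferencing it with `$$header` is all it takes to emit the header.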
What we did here is create a string with that all important last line in the header. This header is actually completely viable now. We could output it and it would be perfectly acceptable to the requesting application. It does not, however, have any cookies defined in it. Those will be added later if we should want them. We will use an object method for those. As you can see, the object creation is almost absurdly simple. Just a string blessed into the object. Should you wish, you could also add arguments to set the mime type to something else. Now, what about those cookies? How do we handle those? Cookies are handled by lines in the header like this one: "Set-Cookie: cookiename=cookievalue\n" That is a basic cookie header. You could also add some optional parameters: "Set-Cookie: cookiename=cookievalue; path=value; expires=value; domain=value\n" Our method needs to build the Set-Cookie line based on parameters and then prepend the line onto our header object. That prepend is very important, remember, because the line already in the header has to stay last. We will use a hash to pass the cookie's name and value pair into the method. If we have any of the optional parameters to set, we can store those in the hash also.
sub add_cookie {
    my $self = shift;
    my $Cookie = shift;
    my $String = "Set-Cookie: " . $$Cookie{name} . "=";
    $String .= "$$Cookie{value}";
    if (exists ($$Cookie{path})) { ## set cookie's path
        $String .= "; path=";
        $String .= $$Cookie{path};
    }
    if (exists ($$Cookie{expires})) { ## set cookie's expiration
        $String .= "; expires=";
        $String .= $$Cookie{expires};
    }
    if (exists ($$Cookie{domain})) { ## set cookie's domain
        $String .= "; domain=";
        $String .= $$Cookie{domain}; # was $$Cookie{expires}, a copy/paste bug
    }
    $$self = $String . "\n" . $$self;
}
Again the method is absurdly simple: using the values from the hash to build the header line and prepending it to the header object's string. What if we want multiple values in our cookies though? We could just set a whole bunch of cookies, one for each name=value pair we needed, but for some applications this would quickly get unwieldy to use. What we need is multivalue cookies. Happily such a thing is possible. We just have to work out a way to separate a cookie's value string into sub name=value pairs. To do this we need a separator for each pair and one to separate each name from the value. These separators are arbitrary, but you probably want to use something that isn't likely to occur in your names or values. Also, the ";" and "=" are not a good idea since they are used elsewhere in the HTTP header as separators. I chose the ":" and the "," to act as separators. The colon separates names from values and the comma separates name:value pairs. Once we have our separators, we need to change our object method so it can recognize whether we are setting multivalue or single value cookies and act accordingly. We still use the hash to pass the values, but this time, if it's a multivalue cookie, the value key of the hash stores a reference to another hash holding the name=value pairs for our multivalue cookie. We need to test for the presence of this hash and generate our Set-Cookie line accordingly. Here is our new object method.
sub add_cookie {
    my $self = shift;
    my $Cookie = shift;
    my $String = "Set-Cookie: " . $$Cookie{name} . "=";
    if (ref($$Cookie{value}) eq "HASH") {
        #print "its a hash";
        my $CookieValue = $$Cookie{value};
        foreach my $key (sort(keys(%$CookieValue))) {
            $String = $String . $key . ":" . $$CookieValue{$key} . ",";
        }
        #print $String;
    } else {
        #print "its not a hash";
        #print $$Cookie{value};
        $String .= "$$Cookie{name}:$$Cookie{value},";
    }
    if (exists ($$Cookie{path})) { ## set cookie's path
        $String .= "; path=";
        $String .= $$Cookie{path};
    }
    if (exists ($$Cookie{expires})) { ## set cookie's expiration
        $String .= "; expires=";
        $String .= $$Cookie{expires};
    }
    if (exists ($$Cookie{domain})) { ## set cookie's domain
        $String .= "; domain=";
        $String .= $$Cookie{domain}; # was $$Cookie{expires}, a copy/paste bug
    }
    $$self = $String . "\n" . $$self;
    #print $$self;
}
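To see what this produces, here is a self-contained sketch of my own (it repeats new_header and add_cookie so it runs standalone, with the domain line writing $$Cookie{domain} rather than the expires value) that sets a single-value and a multivalue cookie and prints the resulting header:

```perl
use strict;
use warnings;

# Standalone sketch of the pk_cgi header object from the article,
# exercised with both cookie styles. Not the published module itself.
package pk_cgi;

sub new_header {
    my $proto = shift;
    my $class = ref($proto) || $proto;
    my $header = "Content-type: text/html\n\n";
    return bless(\$header, $class);
}

sub add_cookie {
    my $self   = shift;
    my $Cookie = shift;
    my $String = "Set-Cookie: " . $$Cookie{name} . "=";
    if (ref($$Cookie{value}) eq "HASH") {
        # multivalue cookie: serialize as key:value pairs joined by commas
        my $CookieValue = $$Cookie{value};
        foreach my $key (sort(keys(%$CookieValue))) {
            $String = $String . $key . ":" . $$CookieValue{$key} . ",";
        }
    } else {
        # single value stored in the same name:value shape for easy parsing
        $String .= "$$Cookie{name}:$$Cookie{value},";
    }
    $String .= "; path="    . $$Cookie{path}    if exists $$Cookie{path};
    $String .= "; expires=" . $$Cookie{expires} if exists $$Cookie{expires};
    $String .= "; domain="  . $$Cookie{domain}  if exists $$Cookie{domain};
    $$self = $String . "\n" . $$self;   # prepend: Content-type must stay last
}

package main;

my $header = pk_cgi->new_header();
$header->add_cookie({ name => 'session', value => 'abc123' });
$header->add_cookie({ name => 'prefs',
                      value => { theme => 'dark', lang => 'en' },
                      path  => '/' });
print $$header;
```

Each add_cookie call pushes a new Set-Cookie line onto the front of the string, so the Content-type line always stays last, which is exactly the property the prose above insists on.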
We changed several things to facilitate retrieving our cookies later with the new multivalue cookie format. Since, while retrieving our cookies, we don't know whether a cookie is multivalue or not, storing single-value cookies in the same format makes retrieval easier on us. The new method checks for a hash reference in $$Cookie{value}; if there is one, it stores the multiple values from that hash. If there isn't, it stores the single value in the same name:value format. We can now handle setting cookies in our CGI module. All we have left is retrieving them. There are two ways we might want to retrieve the cookies we've set in our application: retrieving a list of all the cookies sent, or retrieving a cookie by name. Retrieving by name is probably the more useful of the two, so let's start with that one. First we need to pull the list of cookies out of the header. Then we need to locate the cookie we want, and finally we need to return that cookie's value or list of name=value pairs. Since we had the foresight to store single values in the same format as multiple values, we made it a little easier on ourselves: we can treat single-value cookies the same as multivalue cookies. We will pass the cookie id in as a string. When a browser sends cookies to the server they get stored in Perl's %ENV hash under the HTTP_COOKIE key. If the key doesn't exist then there were no cookies. So let's get started on that method. We pull the list of cookies out of the header like so:
sub get_cookie {
    my $CookieId = shift;
    my %CookieVars;
    if (exists $ENV{'HTTP_COOKIE'}) {
        my @buffer = split(/;/, $ENV{'HTTP_COOKIE'});
    } else {
        $CookieVars{Status} = 0;
        return 0;
    }
}
A return value of 0 means no cookies were found. The cookies are stored in a buffer array after being split on the ";". The next thing we need to do is extract the name value pairs of each of the cookies and return the cookie we are looking for.
foreach my $i (@buffer) {
    (my $Name, my $Value) = split(/=/, $i);
    if ($CookieId eq $Name) {
        my @buffer2 = split(/,/, $Value);
        foreach my $y (@buffer2) {
            (my $CVar, my $CVal) = split(/:/, $y);
            $CookieVars{$CVar} = $CVal;
        }
        $CookieVars{Status} = 1;
        return %CookieVars;
    }
}
After storing the cookies in the buffer we step through it, splitting each cookie on the equal sign to get the name and the value. Then an if statement tests whether the name matches the cookie id. When we locate the cookie we want, we store its name:value pairs in a hash and return that hash. It doesn't matter whether the cookie had multiple values or not; it still returns a hash. Lastly, if we couldn't find the cookie we were looking for, we set the Status field of the hash to 0 and return the hash. When we retrieve a cookie we can check this Status field for a value of 0 to see whether the cookie existed; a value of 1 means it did. Here is the complete method:
sub get_cookie {
    my $CookieId = shift;
    my %CookieVars;
    if (exists $ENV{'HTTP_COOKIE'}) {
        my @buffer = split(/;/, $ENV{'HTTP_COOKIE'});
        foreach my $i (@buffer) {
            (my $Name, my $Value) = split(/=/, $i);
            if ($CookieId eq $Name) {
                my @buffer2 = split(/,/, $Value);
                foreach my $y (@buffer2) {
                    (my $CVar, my $CVal) = split(/:/, $y);
                    $CookieVars{$CVar} = $CVal;
                }
                $CookieVars{Status} = 1;
                return %CookieVars;
            }
        }
    }
    ## the cookie wasn't found, or there were no cookies at all
    $CookieVars{Status} = 0;
    return %CookieVars;
}
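The same lookup can be sketched in Python for comparison (a hypothetical helper for illustration; the article's real code is the Perl module). Both the semicolon split between cookies and the "," and ":" splits inside a value follow the format established earlier:

```python
def get_cookie(cookie_id, http_cookie):
    """Find a cookie by name in an HTTP_COOKIE-style string and return
    its sub name:value pairs plus a Status flag (1 found, 0 not)."""
    cookie_vars = {}
    for chunk in (http_cookie or "").split(";"):
        name, _, value = chunk.strip().partition("=")
        if name == cookie_id:
            # Split the value string into its name:value pairs.
            for pair in value.split(","):
                if pair:
                    k, _, v = pair.partition(":")
                    cookie_vars[k] = v
            cookie_vars["Status"] = 1
            return cookie_vars
    # Not found, or no cookies were sent at all.
    cookie_vars["Status"] = 0
    return cookie_vars
```

Whether the cookie held one value or many, the caller always gets a dictionary back and can check its Status field, just as the article describes.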
Retrieving a list of all the cookies is actually much easier. Here is the code for the method; I'll leave interpreting it as an exercise for the reader.
sub get_cookie_list {
    my @buffer = split(/;/, $ENV{'HTTP_COOKIE'});
    my %cookies;
    foreach my $i (@buffer) {
        (my $Name, my $Value) = split(/=/, $i);
        my @CookieValues = split(/,/, $Value);
        my %CookieVars;
        foreach my $j (@CookieValues) {
            (my $CookieVariable, my $CookieValue) = split(/:/, $j);
            $CookieVars{$CookieVariable} = $CookieValue;
        }
        $cookies{$Name} = \%CookieVars;
    }
    return %cookies;
}
Using the module is also an easy matter. Simply create a new header object using the constructor method, add your cookies using the appropriate object methods, and output the header before printing anything else on your page. A simple script using the module is shown below.
use pk_cgi;

my $header = pk_cgi->new_header;
my $new_cookie = {name => "name", value => "some value"};
$header->add_cookie($new_cookie);
print $header->get_header;
my %cookie = pk_cgi::get_cookie("name");
print "<?xml version='1.0' encoding='UTF-8'?>\n";
print "<!DOCTYPE html PUBLIC '-//W3C//DTD XHTML 1.0 Transitional//EN' 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd'>\n";
In a later article I will address retrieving GET and POST variables from forms. Additional Reading: * http://www.perldoc.com/ * http://www.cgi101.com/class/ * http://www.ltsw.se/knbase/internet/mime.htp You may also be interested in these tutorials and articles: Mod_Perl 2.0 how-to's diff --git a/_content/RAP-with-me-now-2005-9-8.yml b/_content/RAP-with-me-now-2005-9-8.yml new file mode 100644 index 0000000..241cdda --- /dev/null +++ b/_content/RAP-with-me-now-2005-9-8.yml @@ -0,0 +1,9 @@ +title: RAP with me now... +time: 2005-09-08 02:19:36 +tags: + - APIs + - Software-Development + - User-Interface +content-type: html +content: | +Rapid Application Prototyping, or RAP(ing) as I will be calling it, is a fantastic way to be sure you meet your design goals for a project. Furthermore, with AppKit (my own personal Web Application development Framework), it is greatly simplified through the use of an "advanced" plugin and templating engine. How so, you ask? Well, I'll tell you... Application Logic vs UI Flow What's the difference between these two things? Application Logic is all about how your application handles user input and data. UI Flow is all about how the user sees and inputs data. When the two are separated you can work on each without disturbing the other. This allows you to, for instance, quickly prototype your UI screens and workflow without worrying about how the application logic works behind the scenes. That way you can get valuable feedback from customers and assistance in your requirements gathering process. Templating: (develop that unique look before you do the behind the scenes stuff) When I first got started in this web development thing I didn't know there was such a thing as templating. I developed logic right alongside my UI. In fact, in a lot of ways my UI was driven by my application logic. That meant changing something required recoding and reworking my app's logic.
This, while challenging and fulfilling, wasn't a particularly useful way of going about things. It was, however, fashionable at the time and everyone was doing it. Nowadays I've grown up and use a much more efficient system. I build my UI separately using a templating engine. This lets me attach logic to it later (I can detach logic too, or even rework the logic without once touching the template). I can change the template, rework it, or even drop it completely, all without touching the logic or even having any logic behind it. In essence, I can create a mockup of the program's UI flow, demonstrate it, tweak it, test it, and then attach the backend. RAP is definitely the way to go. Plugin Architectures (add a piece here, add a piece there) So you can create your UI without once touching the logic. All well and good, you say, but what then? Ahhh, that is the beauty of it. Once you have your UI in place, start attaching actions to the UI. Then develop the logic that handles each action. If your framework has plugin functionality then you can do that piece by piece. AppKit dynamically loads the plugin you need to handle the action you requested. If no plugin fits the action it will tell you so. Need an action? Develop an interface for it. Think of plugins as the hooks from your UI into the application. And all you have to do is drop them in, one at a time or by the wheelbarrow full if you want. Complete separation of logic and program flow/UI. It's a beautiful thing, trust me. diff --git a/_content/Read-this-article-2006-4-12.yml b/_content/Read-this-article-2006-4-12.yml new file mode 100644 index 0000000..451ab21 --- /dev/null +++ b/_content/Read-this-article-2006-4-12.yml @@ -0,0 +1,6 @@ +title: Read this article +time: 2006-04-12 18:52:10 +section: Site-News +content-type: html +content: | +http://www.opinionjournal.com/extra/?id=110008220 trust me it'll open your eyes on a few things.
and yes, I know this has absolutely nothing to do with code or even computers, but I thought it was important. diff --git a/_content/Really-liking-the-new-kde4-2009-6-27.yml b/_content/Really-liking-the-new-kde4-2009-6-27.yml new file mode 100644 index 0000000..bbbfd96 --- /dev/null +++ b/_content/Really-liking-the-new-kde4-2009-6-27.yml @@ -0,0 +1,10 @@ +title: Really liking the new kde4 +time: 2009-06-27 13:13:20 +tags: + - Site-News + - desktop-enviroment + - gnome + - kde +content-type: html +content: | +I just recently upgraded my home desktop. It was nearing 10 years in age and desperately whispering to me in my sleep that it wasn't going to last much longer. The new machine is a quad core AMD Phenom with a Gigabyte board and 8 gig of ram. This of course necessitated a new OS to go along with the shiny new hardware, so along comes Ubuntu Jaunty with its 64-bit joy and the shiny new KDE4 desktop. I had previously switched from kde to gnome, despite being a kde fan for years, because gnome had just begun to feel more cohesive. I still disliked gnome's lack of configurability compared to kde, but overall gnome felt better. But with the new kde4 I've come back. Plasma and the kde desktop have really upped the game. The whole experience has vastly improved and kde no longer feels like it's lagging behind but has leaped ahead of the competition. If you haven't checked it out yet you should give it a try. diff --git a/_content/Response-to-Matt-on-PHP-2007-7-14.yml b/_content/Response-to-Matt-on-PHP-2007-7-14.yml new file mode 100644 index 0000000..7480322 --- /dev/null +++ b/_content/Response-to-Matt-on-PHP-2007-7-14.yml @@ -0,0 +1,6 @@ +title: Response to Matt on PHP +time: 2007-07-14 19:35:19 +section: Site-News +content-type: html +content: | +I haven't seriously looked at PHP in years, having long ago joined the Perl camp. I know, I know, shame on me. However, from that very perspective, it has seemed to me that PHP's biggest problem is the addons.
Yes, Pear and some of the other frameworks alleviate it somewhat, but really, PHP looks and acts like a thrown-together language. Just look at the function list in the documentation. The various libraries it uses are binary addons. There is little consistency in usage or naming conventions, and almost no namespacing is used. I'll quote Jeroen van der Meer:
PHP is basically a collection of extensions which are all put together to form what we have now. However, these extensions change and so does the collection.
This is almost completely different from the way every other language does it. You might be able to make the case that Java is like this too, but even Java uses namespacing, and most of its libraries are written in Java itself. In a way, PHP's worst problem has been its community of users and the language maintainers' enablement of that community. The Perl community may seem a bit gruff to a PHP user, but getting a module featured on CPAN takes a little more work than it seems getting an addon into the official PHP distribution took. This actually fosters the bad coding practices most Perl and Ruby users associate with PHP coders. PHP may finally be growing up, but it's like taking a rebellious child who hasn't been disciplined in years and sending him to boot camp. He'll have to be dragged kicking and screaming obscenities, but hopefully the result will be worth it. diff --git a/_content/Reuseable-AJAX-gateways-2005-11-28.yml b/_content/Reuseable-AJAX-gateways-2005-11-28.yml new file mode 100644 index 0000000..46779ea --- /dev/null +++ b/_content/Reuseable-AJAX-gateways-2005-11-28.yml @@ -0,0 +1,85 @@ +title: Reuseable AJAX gateways +time: 2005-11-28 16:13:20 +tags: + - Site-News + - APIs + - javascript + - Software-Development + - XML +content-type: html +content: | +Everyone knows about AJAX these days. You just about can't go anywhere on the net without hearing about it. And if you're a coder who wants to know more than just what library you should download to start using it, you've probably done a little googling and come up with this site: XMLHttpRequest Objects [developer.apple.com] You even played around with the examples and made a few demo apps, then realized: Hey!! How can I make these things reusable without ugly global variables and functions that check to see if the response came back yet? In short: how do I use this in a real app? Apple has done a really good job of showing how the xmlhttprequest object works.
They even do a good job of showing some useful ways to use it. But if you're like me you want to go a bit farther. I like reusability. I also don't like using global variables as a gatekeeper. So let's take a look at how we can make this code a little more reusable. The first thing to do is come up with a way to use multiple different functions as the handler for that onreadystatechange property. Using the same handler really cramps our style. Additionally, having to write all that code to test our object's state is a real drag. It would be nice if we could avoid having to write that for every single function we use as a handler. Here is the solution. Let's start with this function here:
+function loadXMLDoc(url) {
+    req = false;
+    // branch for native XMLHttpRequest object
+    if(window.XMLHttpRequest) {
+        try {
+            req = new XMLHttpRequest();
+        } catch(e) {
+            req = false;
+        }
+    // branch for IE/Windows ActiveX version
+    } else if(window.ActiveXObject) {
+        try {
+            req = new ActiveXObject("Msxml2.XMLHTTP");
+        } catch(e) {
+            try {
+                req = new ActiveXObject("Microsoft.XMLHTTP");
+            } catch(e) {
+                req = false;
+            }
+        }
+    }
+    if(req) {
+        req.onreadystatechange = processReqChange;
+        req.open("GET", url, true);
+        req.send("");
+    }
+    return req;
+}
 Now for this to do what we really need, a couple of different things have to change. That processReqChange function needs to be able to change dynamically. So let's add another function argument that will hold a function passed in to be used here, like so: loadXMLDoc(url, func). Then you can change req.onreadystatechange = processReqChange; to req.onreadystatechange = func; This will allow us to pass any function we want as the state change handler. We also return req at the end so the caller gets the request object back; we'll need it in a moment. Don't go deleting that processReqChange function yet though. We still need it. In fact, let's take a look at that one right now, shall we?
+function processReqChange() {
+    // only if req shows "loaded"
+    if (req.readyState == 4) {
+        // only if "OK"
+        if (req.status == 200) {
+            // ...processing statements go here...
+        } else {
+            alert("There was a problem retrieving the XML data:\n" + req.statusText);
+        }
+    }
+}
 We need this to keep checking our state and tell us when our response came back. We also need it to work with any xmlhttprequest object we give it. What we don't need it to do is retrieve our response for us. In short, we need it to receive a request object in its arguments and return an answer saying whether it's OK to process our response. So let's modify it a little, shall we?
+function processReqChange(req) {
+    // only if req shows "loaded"
+    if (req.readyState == 4) {
+        // only if "OK"
+        if (req.status == 200) {
+            return 1; // it's safe now, go ahead
+        } else {
+            alert("There was a problem retrieving the XML data:\n" + req.statusText);
+        }
+    }
+    return 0; // it's not safe yet
+}
 Now when we pass this function a request object it returns 1 when we have our response and 0 when the response is not ready yet. Both of these functions are now reusable. But how exactly do we start using them? I thought you would never ask. Let's build an example:
+function append_to_id(el, contents) {
+    var element = document.getElementById(el);
+    element.appendChild(contents);
+}
+function append(url, el) {
+    var func = function() {
+        if (processReqChange(req)) {
+            var ajax_return = req.responseXML;
+            while (ajax_return.hasChildNodes()) {
+                // appendChild moves each node out of the response
+                // document, so the loop terminates on its own
+                append_to_id(el, ajax_return.firstChild);
+            }
+        }
+    }
+    var req = loadXMLDoc(url, func);
+}
 In the append function we create a dynamic function that we can pass to our loadXMLDoc function. That dynamic function contains the meat of what we want to do. It uses an if statement that checks our processReqChange function for a valid return. When it gets a valid return, the if statement processes our request. It couldn't be any easier.
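The core of the refactoring, passing the completion handler in as an argument instead of hard-coding a global one, is language independent. Here is a minimal Python sketch of the same idea (all names are hypothetical, and a fake transport stands in for XMLHttpRequest so the sketch runs anywhere):

```python
class FakeRequest:
    """Stand-in for XMLHttpRequest so the sketch is self-contained."""
    def __init__(self):
        self.onreadystatechange = None
        self.readyState = 0
        self.status = 0
        self.responseText = ""

    def open(self, method, url):
        self.url = url

    def send(self):
        # Simulate a completed, successful response arriving.
        self.readyState, self.status = 4, 200
        self.responseText = "payload"
        if self.onreadystatechange:
            self.onreadystatechange()


def process_req_change(req):
    """True only when the request is loaded (readyState 4) and OK
    (status 200), like the reusable checker in the article."""
    return req.readyState == 4 and req.status == 200


def load_doc(url, handler, transport=FakeRequest):
    """Create a request and attach the caller-supplied handler. Because
    the handler is a parameter, any number of callers can reuse this
    gateway with their own processing logic."""
    req = transport()
    req.onreadystatechange = lambda: handler(req)
    req.open("GET", url)
    req.send()
    return req
```

Each caller supplies its own handler, checks `process_req_change`, and does its own processing, which is exactly the structure the `append` example above uses.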
You can see the full example code here: Example Script diff --git a/_content/SOAP-or-the-lack-thereof-2006-10-27.yml b/_content/SOAP-or-the-lack-thereof-2006-10-27.yml new file mode 100644 index 0000000..8035b78 --- /dev/null +++ b/_content/SOAP-or-the-lack-thereof-2006-10-27.yml @@ -0,0 +1,6 @@ +title: SOAP or the lack thereof +time: 2006-10-27 14:50:44 +tags: Site-News +content-type: html +content: | +My first professional experience with pulling data via a SOAP service has proven to be very disappointing. I am fairly sure that SOAP services work very well as long as the company providing them knows what it is doing. This company does not appear to know what it is doing. So, for future reference, when providing a SOAP service to someone, please include the following items in your description of how to hit it: the namespace URL and a complete object description for the call. Then don't change these under any circumstances. Don't assume that the people using your service are going to be using .NET on the client side. The whole point of SOAP is cross-platform RPC calls. If your system won't work out of the box with Java or Perl or Python clients then you didn't set it up right. Anyway, I'm done ranting now. Maybe this next time they will get it right. diff --git a/_content/SQL-Stupidity-part-II-2005-9-7.yml b/_content/SQL-Stupidity-part-II-2005-9-7.yml new file mode 100644 index 0000000..44ee79c --- /dev/null +++ b/_content/SQL-Stupidity-part-II-2005-9-7.yml @@ -0,0 +1,9 @@ +title: SQL Stupidity part II +time: 2005-09-07 21:31:35 +tags: + - ANSI-SQL + - Data + - Software-Development +content-type: html +content: | +I am working on a legacy web application right now that is giving me fits. I'd say about 90% or so of the application is done in SQL. Yes, you got that right. The business logic is almost completely written in a huge number of stored procedures, SQL functions, and scheduled database tasks.
This makes tracking down the parts of the app you are trying to work on very difficult. Every time I turn around there is another stored procedure, function, or scheduled database task that needs tweaking. I'm starting to go a little crazy. The problem is it obfuscates what your application is really doing. You think a Perl obfuscation contest produces difficult-to-follow code? They've got nothing on this. I realize stored procedures were the cat's meow at the time, but this is beyond all human decency. I have got to start refactoring this thing before it gets further out of control. diff --git a/_content/SQL-stupidity-2005-8-25.yml b/_content/SQL-stupidity-2005-8-25.yml new file mode 100644 index 0000000..6f320e8 --- /dev/null +++ b/_content/SQL-stupidity-2005-8-25.yml @@ -0,0 +1,9 @@ +title: SQL stupidity +time: 2005-08-25 04:03:39 +tags: + - ANSI-SQL + - Data + - Software-Development +content-type: html +content: | +Let me let you in on a secret. SQL is a great language for getting information out of a database. But if you're writing a long string of stored procedures that call functions which use a view that ties several tables together, just to find out what particular piece of data is linked to your piece of data, then you need a good talking-to. Not only does all this unnecessary complexity make debugging hard for you, but it makes folks like me want to beat you up with a baseball bat. So do us all a favor. See if you can reduce the number of steps and keep the squirming pathway a little straighter. Then you can scream and rant and pummel the brains out of your fellow coders (who didn't heed this warning) with the rest of us.
diff --git a/_content/SSL-Phishing-and-why-you-shoul-2006-2-13.yml b/_content/SSL-Phishing-and-why-you-shoul-2006-2-13.yml new file mode 100644 index 0000000..379f128 --- /dev/null +++ b/_content/SSL-Phishing-and-why-you-shoul-2006-2-13.yml @@ -0,0 +1,6 @@ +title: SSL, Phishing, and why you should never trust your email +time: 2006-02-13 23:30:23 +section: Site-News +content-type: html +content: | +SSL & Phishing get cozy, a look at how the system broke, is a very good analysis of a recent, very targeted phishing attack. The important thing to remember here is that just because one big company says it's so doesn't necessarily mean it is. That little lock that appears to tell you you're using an encrypted connection doesn't always mean you connected to the site you thought you were connecting to. Always make sure you're familiar with your financial institution's websites. And it wouldn't be a bad idea to verify emails by phone, too. I can guarantee you those malicious phishers are quite capable of pulling the wool over your eyes the moment you let your guard down. Take it from me: banks that email you stuff out of the blue are bad. diff --git a/_content/Semantic-Markup??-2006-8-24.yml b/_content/Semantic-Markup??-2006-8-24.yml new file mode 100644 index 0000000..268f22e --- /dev/null +++ b/_content/Semantic-Markup??-2006-8-24.yml @@ -0,0 +1,6 @@ +title: Semantic Markup?? +time: 2006-08-24 14:53:30 +section: Site-News +content-type: html +content: | +As long as I've been doing stuff on the web I've heard the constant refrain about semantic markup. Here and then here are just two of the latest. So I thought I'd add my own thoughts to the mix. What often gets lost in these discussions is the difference between semantic markup and the display of your markup. The fellow over at Six27 makes the point that using CSS obviates any need for semantic markup as it pertains to the browser's display. He seems to think this is what makes semantic markup meaningless.
However, that is exactly the point of semantic markup. You can structure the data on the page any way you wish and have the presentation however you want it. This actually makes semantic markup easier. Semantic markup is not there to help you display the data. It is only there to make more information available to those who want to use it. He also makes the point that lack of universal browser support makes semantic markup useless. That would be true if universal support were necessary to make markup useful. In point of fact, universal support is not necessary. What is necessary is a standard. You don't really care whether every client supports the standard. What you do care about is whether there is a standard that allows the people who wish to take advantage of your semantic markup to do so. In the end, what is necessary is just support of the standard by your intended audience. The semantic markup paradigm isn't dead; it's just being used under the surface by those who have a use for it. And thanks to CSS, everyone else is free to ignore it if they so choose. diff --git a/_content/Small-Dev-Team-development-infrastructure-2014-09-29.yaml b/_content/Small-Dev-Team-development-infrastructure-2014-09-29.yaml new file mode 100755 index 0000000..1ea90d0 --- /dev/null +++ b/_content/Small-Dev-Team-development-infrastructure-2014-09-29.yaml @@ -0,0 +1,11 @@ +title: Small Dev Team development infrastructure. +author: Jeremy Wall +drafttime: 2014-09-29 +timeformat: 2006-01-02 +content-type: markdown +tags: + - site-news +content: | +I recently left Google to work at a small company that is just beginning to +get into the software development biz. I can't talk too much about what we +are doing. But I *can* talk about how we work. Part of the reason I came aboard was to help get the development infrastructure set up.
diff --git a/_content/So-google-has-IM-2005-8-23.yml b/_content/So-google-has-IM-2005-8-23.yml new file mode 100644 index 0000000..32b35b0 --- /dev/null +++ b/_content/So-google-has-IM-2005-8-23.yml @@ -0,0 +1,6 @@ +title: So google has IM... +time: 2005-08-23 21:20:56 +section: Site-News +content-type: html +content: | +and of course, being the bright young google lover I am, I added it to my gmail client. You can now reach me using the jabber protocol, my gmail username, and google's new talk service. No, sorry, Gaim doesn't support the voice portion, but my voice isn't that spectacular anyway. So what are you waiting for? Get on the google bandwagon and contact me on google's talk service. diff --git a/_content/Sorry-about-the-site-downtime-2005-7-28.yml b/_content/Sorry-about-the-site-downtime-2005-7-28.yml new file mode 100644 index 0000000..5da2139 --- /dev/null +++ b/_content/Sorry-about-the-site-downtime-2005-7-28.yml @@ -0,0 +1,6 @@ +title: Sorry about the site downtime... +time: 2005-07-28 05:08:18 +section: Site-News +content-type: html +content: | +We are moving some servers and installing better UPS and air conditioning in our server room right now, so the site may go up and down every once in a while. Sorry for the inconvenience. diff --git a/_content/Strangle-your-neighborhood-MS-em-2005-12-22.yml b/_content/Strangle-your-neighborhood-MS-em-2005-12-22.yml new file mode 100644 index 0000000..7163eee --- /dev/null +++ b/_content/Strangle-your-neighborhood-MS-em-2005-12-22.yml @@ -0,0 +1,6 @@ +title: Strangle your neighborhood MS employee +time: 2005-12-22 15:05:13 +section: Site-News +content-type: html +content: | +Do the world a favor: the next M$ employee you meet, don't hesitate, don't think... just strangle the life out of him. I'm so angry right now I could kill someone. Has anyone ever tried to switch SSL certificate providers on an IIS server? You're better off switching servers.
To top it off, the provider we were switching to told me Microsoft won't let them link to the knowledgebase article that walks you through it. In effect, they won't allow the Certificate authority to give instructions on the proper method. So, I'm going to tell you, now that I know. You can't generate a CSR on IIS without getting rid of your old certificate for the site. This means you have either the option of being without a certificate for the 1-7 days it takes to get a new certificate or you don't get a new certificate and just renew the old one with the current provider. Not an option for my customer. Even a day with no certificate would cause significant problems. So you have to work around it. In effect, you have two choices. Actually, they are work-arounds to a "bug" that microsoft calls a "feature." You can create a dummy server to generate the request on. Then, once you have your certificate, you can export it for use on your real server. Or, option number two, create a dummy site on the real server, use it to generate your CSR, and then import the certificate into the real site. Either way, you have to do extra steps that aren't immediately obvious because Microsoft made crummy design decisions for their software. And people wonder why I prefer Open Source. 
diff --git a/_content/Switching-Tabs-in-a-Tabbed-Group-2007-2-13.yml b/_content/Switching-Tabs-in-a-Tabbed-Group-2007-2-13.yml new file mode 100644 index 0000000..ca41c16 --- /dev/null +++ b/_content/Switching-Tabs-in-a-Tabbed-Group-2007-2-13.yml @@ -0,0 +1,6 @@ +title: Switching Tabs in a Tabbed Group +time: 2007-02-13 12:43:56 +section: Site-News +content-type: html +content: | +Switching Windows in a Tabbed Group in Beryl diff --git a/_content/Tech-Support-People-2005-8-18.yml b/_content/Tech-Support-People-2005-8-18.yml new file mode 100644 index 0000000..03bd007 --- /dev/null +++ b/_content/Tech-Support-People-2005-8-18.yml @@ -0,0 +1,6 @@ +title: Tech Support People +time: 2005-08-18 04:51:55 +section: Site-News +content-type: html +content: | +I've noticed something today. I can make Tech Support ladies laugh. I'm quite good at it. I've had to talk to a number of tech support folks recently and they've all been women. I don't know why but it comes in handy. If you make them laugh they are much nicer when trying to help you. Now if my wife is reading this don't worry. I'm not flirting.... much. diff --git a/_content/Texas-Attorney-General-vs-Sony-B-2005-11-21.yml b/_content/Texas-Attorney-General-vs-Sony-B-2005-11-21.yml new file mode 100644 index 0000000..549916b --- /dev/null +++ b/_content/Texas-Attorney-General-vs-Sony-B-2005-11-21.yml @@ -0,0 +1,6 @@ +title: Texas Attorney General vs Sony BMG +time: 2005-11-21 12:25:18 +section: Site-News +content-type: html +content: | +Texas Attorney General Go Texas!! 
diff --git a/_content/That-Guy-(Information-Hookup-p-2006-1-17.yml b/_content/That-Guy-(Information-Hookup-p-2006-1-17.yml new file mode 100644 index 0000000..8336389 --- /dev/null +++ b/_content/That-Guy-(Information-Hookup-p-2006-1-17.yml @@ -0,0 +1,6 @@ +title: "That Guy" (Information Hookup part II) +time: 2006-01-17 14:13:24 +section: Site-News +content-type: html +content: | +In an earlier post about "that guy" who can get you what you need, I asked if you were the information hookup for your family, friends, and coworkers. If you are, then you have probably noticed something: when the company recognizes how useful you are in that respect, you will never be left alone. It becomes a part of your unofficial job description, and saying no is just about not an option. This can get difficult. If you are lucky you will find an opportunity in a mandate from upper management to work on a project. You have just found a gold mine. Mine it for all it's worth; when it runs dry you'll wish you still had it. This particular mine yields the highly coveted "power to say no". You have an intensive job to do and it takes precedence. The information seekers will have to wait. Enjoy it while it lasts, because when the project is over the avalanche will continue. diff --git a/_content/The-Eclipse-is-here-2005-7-25.yml b/_content/The-Eclipse-is-here-2005-7-25.yml new file mode 100644 index 0000000..72592ce --- /dev/null +++ b/_content/The-Eclipse-is-here-2005-7-25.yml @@ -0,0 +1,9 @@ +title: The Eclipse is here!!!! +time: 2005-07-25 00:02:50 +tags: + - Languages + - Software-Development + - User-Interface +content-type: html +content: | +Eclipse 3.1 has been released. I am a big fan of Eclipse. It is quite possibly the best all-around IDE for developers out there. It may be a bit slow compared to a non-Java app, but it has all the features you need.
  • It handles pretty much every language you can think of.
  • Works on most every platform.
  • Supports multiple versioning systems out of the box.
  • Has built-in debugging support, including a lot of great features for the web developer, like running a web server process with the ability to step through your code.
What more can you ask for? The Eclipse GUI toolkit (SWT) is also significantly faster than its Java counterparts. I'd highly recommend taking it for a spin. It even supports a sophisticated update and patch downloading system to make keeping it up to date easy. That's something a lot of Open Source systems are lacking lately. diff --git a/_content/The-Perl-Deployment-Kit-2005-6-2.yml b/_content/The-Perl-Deployment-Kit-2005-6-2.yml new file mode 100644 index 0000000..6b97517 --- /dev/null +++ b/_content/The-Perl-Deployment-Kit-2005-6-2.yml @@ -0,0 +1,10 @@ +title: The Perl Deployment Kit +time: 2005-06-02 01:30:17 +tags: + - Site-News + - Languages + - Perl + - Software-Development +content-type: html +content: | +So work has bought me the Perl Deployment Kit from ActiveState. It allows me to compile Perl scripts into executables. I think it builds on the experimental perlcc family of utilities. All I can say is: wow, is that cool or what? It lets me take Perl scripts I design to make administration tasks easier and turn them into executables for use elsewhere, without installing Perl on the target computer. I can make system tray applications, system services, and standalone executables. I am going to have a lot of fun with this. diff --git a/_content/The-Power-of-Modular-XHTML-2005-5-17.yml b/_content/The-Power-of-Modular-XHTML-2005-5-17.yml new file mode 100644 index 0000000..7178e5a --- /dev/null +++ b/_content/The-Power-of-Modular-XHTML-2005-5-17.yml @@ -0,0 +1,6 @@ +title: The Power of Modular XHTML +time: 2005-05-17 04:00:45 +section: Uncategorized +content-type: html +content: | +The Power of Modular XHTML XHTML 1.1 has been modularized. You can take out or add pieces to it simply by creating your own DTD using the modules defined by the W3C. Ooooohhh, you say. Nifty and all that stuff. But what does that help me with exactly? Why on earth do I care? In this article I will show you one potential use for modularized XHTML 1.1.
Scenario: you are writing a templating engine for your really cool web app. Or maybe you're using one of the already available ones and just defining your own template tags. Furthermore, others on the project will be using this engine and your tags. Perhaps even customers will be using the tags. So how can you make it easier to use them correctly? Enter Modularized XHTML1.1. You can define your tags in a custom DTD that extends XHTML1.1. Then when your users write their templates they can use that DTD in the Doctype declaration of the page. A validating XML editor or command-line validator can be run over their page and inform them of any well-formedness errors. Or even better, if they are using an on-the-fly validating editor with command assist generated from the DTD (like Object Factories XML Buddy for Eclipse) then they can get an easy popup of available tags along with real-time error checking. So how do you accomplish this magic? You write the DTD. Making a custom DTD is easier than it looks at first. The first thing you need to know is how to define your elements (or tags if you prefer). This is done with a special DTD declaration which looks like this:
<!ELEMENT tagname (tag contents)>
You need one of these for each of your template tags. The part in parentheses is a list of all the tags which can appear inside your tag. Use EMPTY if the tag shouldn't contain any text or other tags, ANY if the tag can contain any kind of content, and #PCDATA for parsed character data. Next you need to define what attributes, if any, your tag can have. Each attribute is defined using an ATTLIST declaration:
<!ATTLIST tagname attribute-name CDATA #REQUIRED>
CDATA indicates the attribute's value should be character data. You can specify an actual value if you wish. The #REQUIRED keyword is one of several which tell the validator whether the attribute is fixed (#FIXED), optional (#IMPLIED), or required (#REQUIRED). Lastly you need to tell the validator where all these tags fit in with the other XHTML1.1 tags. This is done through the magic of parameter entities. Parameter entities allow you to define a "variable" to represent a block of text in your DTD. These statements look like this:
<!ENTITY % Misc.extra "| tagname | second-tagname">
This appends the "| tagname | second-tagname" string to the Misc.extra entity which is defined in the XHTML1.1 Modules. This particular one has the effect of adding those tags as accepted content most anywhere in the body of an XHTML document. For a detailed breakdown of all the entities defined in the XHTML1.1 Modules you can look here. Now that we've defined all our tags, all we have to do is link the XHTML1.1 modules in with our DTD. We do this by declaring our own parameter entity which points to the XHTML1.1 DTD and then including that DTD in our custom DTD like so:
<!ENTITY % xhtml11.dtd PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"> %xhtml11.dtd;
The first line sets the xhtml11.dtd entity equal to the contents of the file located at the URL. Then the subsequent line uses that entity to include the file at the bottom of ours. Now we have a working DTD for our template language. Great!!! Now how do we use it? We upload our DTD to a web location for global access, or somewhere on our intranet if we only need local access. Then we write our doctype declarations to point there, like so:
<!DOCTYPE html PUBLIC "-//PhotoKit//DTD XHTML-PKTMPL 1.0//EN" "http://www.marzhillstudios.com/DTDs/PKTMPL-1.0.dtd">
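For reference, the declarations above assembled into one file give a complete (if hypothetical) template DTD. The tag and attribute names here are invented for illustration; note the Misc.extra override has to come before the XHTML1.1 include, since the first declaration of a parameter entity wins:

```dtd
<!-- Hypothetical template-tag DTD extending XHTML 1.1 -->
<!ELEMENT tmpl-insert EMPTY>
<!ATTLIST tmpl-insert name CDATA #REQUIRED>

<!ELEMENT tmpl-loop ANY>
<!ATTLIST tmpl-loop over CDATA #REQUIRED>

<!-- Make the new tags legal wherever Misc.extra content is allowed -->
<!ENTITY % Misc.extra "| tmpl-insert | tmpl-loop">

<!-- Pull in the standard XHTML 1.1 DTD around our additions -->
<!ENTITY % xhtml11.dtd PUBLIC "-//W3C//DTD XHTML 1.1//EN"
    "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
%xhtml11.dtd;
```

Hosted at the URL a DOCTYPE like the one above points to, this single file is all a validating editor needs.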
Everything else in your web document will stay the same, with the exception of any template tags you may put in there. It will validate in any standards-based validator, and validating editors can perform real-time error checking and perhaps even command assist for your template tags. Just another piece of value added to your app. Additional Reading: diff --git a/_content/The-Power-of-Modular-XHTML-Yet-a-2005-5-17.yml b/_content/The-Power-of-Modular-XHTML-Yet-a-2005-5-17.yml new file mode 100644 index 0000000..4edd9d7 --- /dev/null +++ b/_content/The-Power-of-Modular-XHTML-Yet-a-2005-5-17.yml @@ -0,0 +1,11 @@ +title: The Power of Modular XHTML Yet another old article popping up +time: 2005-05-17 04:13:43 +tags: + - Data + - Software-Development + - User-Interface + - X-HTML + - XML +content-type: html +content: | +Over at A List Apart they are talking about custom DTDs and Modular XHTML, so I thought I'd dig up an old article I wrote on the subject and share it with you yet again. The Power of Modular XHTML diff --git a/_content/The-Value-of-Open-Source-Platfor-2006-4-3.yml b/_content/The-Value-of-Open-Source-Platfor-2006-4-3.yml new file mode 100644 index 0000000..2ab2d61 --- /dev/null +++ b/_content/The-Value-of-Open-Source-Platfor-2006-4-3.yml @@ -0,0 +1,8 @@ +title: The Value of Open Source Platforms +time: 2006-04-03 22:34:45 +tags: + - Open-Source + - Software-Development +content-type: html +content: | +People ask me at times why I like Open Source so much. It's true that OSS can sometimes take a little more work up front. Sometimes, however, OSS is the only way to solve a problem quickly and with the least amount of trouble. Take my current project for example. I have a client with legacy code on an IIS server written in ASP/VBScript. Now I have nothing particular against ASP. In many ways I cut my teeth on ASP. However, IIS CGI scripting has a nasty little bug which is rare but just happened to affect me. 
Certain database connections will cause IIS and the CGI application to get out of sync. The result is that the webserver sends absolutely nothing to the browser. No error page. No data. Nothing. Now how are you supposed to fix that? The client can't just ditch his legacy code. He can't upgrade IIS without significant cost. And there is no patch. There are some workarounds, but none of them work predictably 100% of the time. So what is an enterprising programmer to do? Introducing Apache as a backend server. Apache just happens to be completely unaffected by this little problem. So my solution? Set up Apache2 and mod_perl to serve out the data to the IIS server. Use an MSXML object in the ASP code to retrieve the page and display it right alongside the legacy code in the browser. Now the legacy code can exist side by side with the new code, and in addition we can kill two birds with one stone by taking one more step toward the goal of migrating the application to an Apache/Perl/MySQL-or-PgSQL infrastructure. What was the advantage of OSS in this situation? It provided a solution that I could implement immediately with no licensing changes, no cost, and no problems. I didn't have to talk to a sales rep. I didn't have to talk to tech support. I just went ahead and implemented it. Now try doing that with a closed source solution. OSS has readily available solutions with little or no overhead. I guess that's why so many hacker types like working in that environment. We like to solve problems. We don't like waiting on other people to solve them for us. 
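For what it's worth, the ASP side of that bridge can be tiny. A minimal sketch of the technique in classic ASP/VBScript, using MSXML's server-safe HTTP object; the backend URL is invented, and real code would want error handling:

```vbscript
<%
' Fetch a page from the Apache/mod_perl backend server-side
' and inline it next to the legacy ASP output.
Dim http
Set http = Server.CreateObject("MSXML2.ServerXMLHTTP")
http.open "GET", "http://apache-backend.example.com/report.pl", False ' synchronous
http.send
Response.Write http.responseText ' emit the backend's HTML into this page
%>
```

Because the request happens server-side, the browser only ever talks to IIS and never knows Apache is involved.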
 diff --git a/_content/The-wayback-machine-2005-9-12.yml b/_content/The-wayback-machine-2005-9-12.yml new file mode 100644 index 0000000..7f0dacf --- /dev/null +++ b/_content/The-wayback-machine-2005-9-12.yml @@ -0,0 +1,6 @@ +title: The wayback machine +time: 2005-09-12 01:44:20 +section: Site-News +content-type: html +content: | +Apparently some German scholar was made aware of a very old article I wrote a while back. It is featured in his German paper at a university here. I'm not sure what he was saying about it but it is interesting nonetheless. You can learn a lot by looking at what Google has to say about you. Diplomarbeit_Michael_Homoet. I wonder what it says. Anyone care to translate page 65 of this for me? I really have to dig that old article up somewhere. Update: September 12, 2005 @ 09:55 The old article is located here. http://jeremy.marzhillstudios.com/programming/archives/cat_php.html I'll see about moving it to the site later. diff --git a/_content/Tootin-my-own-horn-2005-12-10.yml b/_content/Tootin-my-own-horn-2005-12-10.yml new file mode 100644 index 0000000..e364a32 --- /dev/null +++ b/_content/Tootin-my-own-horn-2005-12-10.yml @@ -0,0 +1,6 @@ +title: Tootin' my own horn +time: 2005-12-10 12:12:15 +section: Site-News +content-type: html +content: | +I have what may quite possibly be the world's smallest and easiest to use AJAX library. It's amazing what can happen when you're willing to look at things in a different way than the hype recommends. I'm gonna have a lot of fun with this. Look for it in the next preview of BrickLayer diff --git a/_content/Top-X-Design-Coding-Principles-2009-8-29.yml b/_content/Top-X-Design-Coding-Principles-2009-8-29.yml new file mode 100644 index 0000000..01d2487 --- /dev/null +++ b/_content/Top-X-Design-Coding-Principles-2009-8-29.yml @@ -0,0 +1,30 @@ +title: Top X Design/Coding Principles +time: 2009-08-29 00:16:37 +tags: + - Site-News + - coding + - principles +content: | +Design/Coding rules + +1. 
Data and Operations should be separate + +2. Workflows are awesome + +3. Work from the outside in + + 1. Then from the Inside out + +4. TDD TDD TDD + +5. Keep the IO partitioned and at the edges + +6. APIs need to be *tight* + + 1. More Permissive == more Headaches + +2. Fuzzy states are bad + +7. Every if has an else + +8. Boilerplate is bad (boring to review, hard to get right) diff --git a/_content/Transactions-as-a-debugging-tool-2006-6-7.yml b/_content/Transactions-as-a-debugging-tool-2006-6-7.yml new file mode 100644 index 0000000..251dbfd --- /dev/null +++ b/_content/Transactions-as-a-debugging-tool-2006-6-7.yml @@ -0,0 +1,16 @@ +title: Transactions as a debugging tool +time: 2006-06-07 14:13:13 +tags: + - ANSI-SQL + - Data + - Software-Development +content-type: html +content: | +Have you ever wanted to test a long SQL DDL script for syntax errors but didn't want to actually create your DB structure yet? I've found the easiest way to do this is through the use of transactions. Simply begin a transaction at the start of the script and roll it back at the end of the script. For example: +-- PostgreSQL DDL script +BEGIN; + -- begins our transaction block + CREATE TABLE test_tbl ( pk numeric NOT NULL, data varchar(128) ); +ROLLBACK; -- roll back everything this script just did +COMMIT; -- use this instead of ROLLBACK to commit the changes + This has the benefit of allowing us to test the script for errors without actually running it against the DB. The EXPLAIN command can do this too on some DBs, but you would need it for every statement you wrote in the script, and some statements will error out if you use EXPLAIN on them. I've found the transaction method to work best for what I want to do. 
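The same wrap-and-roll-back trick also works for dry-running data changes, not just DDL. A sketch in the same PostgreSQL vein; the table and column names here are invented for illustration:

```sql
BEGIN;
  -- try the migration
  ALTER TABLE test_tbl ADD COLUMN created_on date;
  UPDATE test_tbl SET created_on = CURRENT_DATE;
  -- inspect the result while still inside the transaction
  SELECT count(*) FROM test_tbl WHERE created_on IS NULL;
ROLLBACK; -- swap for COMMIT once you're happy with what you saw
```

Until the final COMMIT, nothing the script did is visible to anyone else, so you can poke at the intermediate state safely.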
 diff --git a/_content/Transparency-and-Thumbnailing-2007-2-14.yml b/_content/Transparency-and-Thumbnailing-2007-2-14.yml new file mode 100644 index 0000000..b725c22 --- /dev/null +++ b/_content/Transparency-and-Thumbnailing-2007-2-14.yml @@ -0,0 +1,6 @@ +title: Transparency and Thumbnailing +time: 2007-02-14 13:47:43 +section: Site-News +content-type: html +content: | +Video playing through transparency and in a thumbnail diff --git a/_content/Twitter-and-the-Saga-in-140-char-2009-4-10.yml b/_content/Twitter-and-the-Saga-in-140-char-2009-4-10.yml new file mode 100644 index 0000000..38dc3e7 --- /dev/null +++ b/_content/Twitter-and-the-Saga-in-140-char-2009-4-10.yml @@ -0,0 +1,9 @@ +title: Twitter and the Saga in 140 character chunks +time: 2009-04-10 13:47:53 +tags: + - Site-News + - sagas + - twitter +content-type: html +content: | +I've been on twitter for a while now and am currently up to 1789 posts. I'm a snippet twitterer. I typically don't try to connect tweets to previous tweets, but I'm fascinated by the people who do. Brent Spiner for instance is at this moment twittering a saga of two women in chunks of 140 characters or less. I'm actually somewhat on the edge of my seat. Which is pretty decent for twittering. Perhaps I should consider doing some sagas of my own. First though I'll need to have something saga-worthy happen to me. diff --git a/_content/Update-on-work-2007-7-10.yml b/_content/Update-on-work-2007-7-10.yml new file mode 100644 index 0000000..d1bc637 --- /dev/null +++ b/_content/Update-on-work-2007-7-10.yml @@ -0,0 +1,6 @@ +title: Update on work +time: 2007-07-10 04:56:19 +section: Site-News +content-type: html +content: | +So I'm finally starting to get settled at the new job and I thought I'd let folks know how things are going. I'm liking the SCRUM process for development and the AGILE emphasis here. I'm also finding a surprising number of ways to contribute. So far I've been useful in helping with resolving SVN branching conflicts. 
I've also helped with developing an XML Schema and with JavaScript library standardization recommendations. In fact I may be doing some roundtable dev learning sessions in the next month or so on some of those topics for the other devs, which should be fun. I have some pending articles on here to finish up, and some features to release for Bricklayer and Metabase when I get some time to do so. Which means this space should soon be experiencing some more activity. diff --git a/_content/Useability---A-case-in-point-2005-5-25.yml b/_content/Useability---A-case-in-point-2005-5-25.yml new file mode 100644 index 0000000..cbc46f8 --- /dev/null +++ b/_content/Useability---A-case-in-point-2005-5-25.yml @@ -0,0 +1,6 @@ +title: Useability - A case in point +time: 2005-05-25 22:58:48 +section: Site-News +content-type: html +content: | +My wireless company, whose name I won't reveal, has a design problem on their site. I like the company overall and have no real complaints with their service. But they did make a rather serious mistake in their online bill pay. I'm not telling you who they are since I want to give them a chance to fix it before smearing their name all over the web. Who knows, maybe they will fix it now that I've complained. Their customer service site doesn't really support any browser other than IE. That means Firefox, Safari, Konqueror, Opera, and others can't be guaranteed to work. They know this. They didn't, however, elect to tell me. Or any other user for that matter. What's worse is that the site fails silently. Now this is bad for almost every application, but it's especially bad for online bill pay. If you go to pay a bill online and it gives no indication that the payment didn't go through, it has the potential to cause problems. Maybe even serious problems. So take note: when designing browser-specific web apps make sure your users know the limitations. A notice, pop-up box, or even a cryptic error message in the page are all more desirable than silent failures. 
I'm in the middle of designing an ecommerce site myself right now and I intend to make sure that when it comes to people's money they know when something didn't go right. Lessons learned from other people's mistakes. diff --git a/_content/Using-Reusable-AJAX-Gateways-2005-12-2.yml b/_content/Using-Reusable-AJAX-Gateways-2005-12-2.yml new file mode 100644 index 0000000..7c40ddf --- /dev/null +++ b/_content/Using-Reusable-AJAX-Gateways-2005-12-2.yml @@ -0,0 +1,73 @@ +title: Using Reusable AJAX Gateways +time: 2005-12-02 15:43:31 +tags: + - APIs + - javascript + - Languages + - Software-Development + - User-Interface + - XML +content-type: html +content: | +So now I have a reusable AJAX gateway. Just what exactly am I supposed to do with it? If you look around for a while you will start to notice everyone describing how you can use XSLT, SOAP, and all these other things to pass objects back and forth. And again they all have suggestions for libraries you can use to do this. But what if you're not quite that ambitious? What if you want the speed and power and downright fun of using AJAX without all the huge libraries? Well, as usual, I have an idea. You see, what I really want to do with this is to retrieve pieces of HTML pages from the server to put into my current page. Simple enough, right? Why, I could just use cloneNode from the DOM API to do that. In fact if you looked at my example code from before you saw that I did exactly that. There's just one problem though. The cloned elements and text show up on your page alright, but they aren't part of your HTML document. In fact the elements don't obey any of your HTML rendering engine's rules. It's as if you just went about making up fake tags to put in there. They don't do anything. What we need is a way to take our XML document and duplicate its structure in our HTML document. duplicate_nodes(); to the rescue!!! 
I wrote a small function that takes our HTML fragments (as I call them) and duplicates them in our page's document. Here is how I did it: +
+ function duplicate_nodes(node) { // get our node type name and list of children
+ // loop through all the nodes and recreate them in our document
+ //alert('calling duplicate_nodes: ' + node.nodeName + ' type: ' + node.nodeType);
+ var newnode;
+ if (node.nodeType == 1) {
+ //alert('element mode');
+ newnode = document.createElement(node.nodeName);
+ //alert('node added');
+ newnode.nodeValue = node.nodeValue
+ //test for attributes
+ var attr = node.attributes;
+ var n_attr = attr.length
+ for (i = 0; i < n_attr; i++) {
+ newnode.setAttribute(attr.item(i).name, attr.item(i).nodeValue);
+ alert('added attribute: ' + attr.item(i).name + ' with a value of: ' + attr.item(i).nodeValue);
+ }
+ } else if (node.nodeType == 3 || node.nodeType == 4) {
+ //alert('text mode');
+ try {
+ newnode = document.createTextNode(node.data);
+ //alert('node added');
+ } catch(e) {
+ alert('failed adding node');
+ }
+} while (node.firstChild) {
+ if (newnode) {
+ //alert('node has children');
+ var childNode = duplicate_nodes(node.firstChild);
+ //alert ('back from recursive call with:' + childNode.nodeName);
+ newnode.appendChild(childNode);
+ node.removeChild(node.firstChild);
+ }
+ }
+ return newnode;
+} Now this function currently only handles elements, their attributes, and text or CDATA nodes. Entity and other node type support can be added easily, however. Also I still need to do some testing on the attribute handling to see if it correctly handles stuff like event handlers and id attributes, but it works. (Edit: It handles event handlers with no modification on Firefox.) Let's do like all good code hackers do and take it apart :-) Our first task in this function is to see what kind of node we are handling. This is contained in the nodeType property of the node object. When this is a 1 it's an element. 
When it's a 3 or 4 it's CDATA or a Text node. Thus our if statements: if (node.nodeType == 1) { } else if (node.nodeType == 3 || node.nodeType == 4) { } Elements and Text or CDATA have to be handled very differently, so we check for these two types before doing anything else. In the case of an element node (type 1) we need two more pieces of information: node.nodeName and node.nodeValue These provide us with the details we need when recreating our element in the HTML document. They are pretty well self-explanatory: one is the name or tagName of the element and the other is the element's value. Now we are ready to start creating our new element in the current document like so: newnode = document.createElement(node.nodeName); +//alert('node added');
+newnode.nodeValue = node.nodeValue
+ Now how do we handle its attributes? A simple for loop will do that for us. The attributes property gives us a list of the node's attributes. Calling the length property on that list gives us how many attributes there are. And the for loop loops through each one, duplicating it in our newnode like so: //test for attributes var attr = node.attributes;
+var n_attr = attr.length
+for (i = 0; i < n_attr; i++) {
+ newnode.setAttribute(attr.item(i).name, attr.item(i).nodeValue);
+ alert('added attribute: ' + attr.item(i).name + ' with a value of: ' + attr.item(i).nodeValue);
+}
+ And that's all we need to recreate our element and its attributes. Text nodes are even easier to handle. You just need one piece of information for them: the data property. Create a new text node using the document.createTextNode method with the node.data property and you're good to go: //alert('text mode');
+try {
+ newnode = document.createTextNode(node.data);
+ //alert('node added');
+} catch(e) {
+ alert('failed adding node');
+}
+ There is just one last thing to take care of though. What if our node has children? What do you do then? Function Recursion to the rescue!! 
The firstChild property of a node will tell us if there are any children and a while loop will keep looping as long as it returns true. All we have to do is:
  • call duplicate_nodes recursively with that child as an argument
  • append the returned node to the newnode
  • remove each child from the node
  • and keep looping till no more children exist
Here is the while loop: while (node.firstChild){ + if (newnode) { + //alert('node has children'); + var childNode = duplicate_nodes(node.firstChild); + //alert ('back from recursive call with:' + childNode.nodeName); + newnode.appendChild(childNode); + node.removeChild(node.firstChild); + } +} + The last task of our function is to return the duplicated node: return newnode;. Our duplicate function does not append the node anywhere in our document, so it won't show up. That is the job of the calling function. It can append the new node wherever it wants. diff --git a/_content/Valuable-advice-from-The-Narrato-2005-11-29.yml b/_content/Valuable-advice-from-The-Narrato-2005-11-29.yml new file mode 100644 index 0000000..775d917 --- /dev/null +++ b/_content/Valuable-advice-from-The-Narrato-2005-11-29.yml @@ -0,0 +1,6 @@ +title: Valuable advice from The Narrator +time: 2005-11-29 12:32:22 +section: Site-News +content-type: html +content: | +http://www.hivelogic.com/articles/2005/11/29/using_usr_local Listen to the man, he speaks wisdom :-) diff --git a/_content/Web-20---Paul-Graham-nails-it-2005-12-9.yml b/_content/Web-20---Paul-Graham-nails-it-2005-12-9.yml new file mode 100644 index 0000000..fbf988b --- /dev/null +++ b/_content/Web-20---Paul-Graham-nails-it-2005-12-9.yml @@ -0,0 +1,6 @@ +title: Web 2.0 - Paul Graham nails it +time: 2005-12-09 15:42:08 +section: Site-News +content-type: html +content: | +Some people just have a natural ability to distill exactly what you've been thinking and clearly communicate it. Anyone who has been wondering what this whole Web 2.0 thing is all about should go read this article. Web 2.0 diff --git a/_content/Welcome-to-the-new-beginning-2005-4-6.yml b/_content/Welcome-to-the-new-beginning-2005-4-6.yml new file mode 100644 index 0000000..edd2530 --- /dev/null +++ b/_content/Welcome-to-the-new-beginning-2005-4-6.yml @@ -0,0 +1,6 @@ +title: Welcome to the new beginning... 
+time: 2005-04-06 09:49:43 +section: Site-News +content-type: html +content: | +Sorry for the site downtime. I was migrating to a new server and had some problems. Anyway, watch this space for an upcoming redesign and refit of the site. I'm using new software and have a new purpose. For those of you who follow my wife's site, she suffered the same problems with the move. I think I've just about finished sorting out the mess though, so it won't be long. I'll get my old archives and her site up soon. diff --git a/_content/Well-Designed-Software-and-the-L-2006-9-1.yml b/_content/Well-Designed-Software-and-the-L-2006-9-1.yml new file mode 100644 index 0000000..de7babd --- /dev/null +++ b/_content/Well-Designed-Software-and-the-L-2006-9-1.yml @@ -0,0 +1,6 @@ +title: Well Designed Software and the Last Hill +time: 2006-09-01 11:15:20 +section: Software-Development +content-type: html +content: | +There comes a time in your development cycle where, if you have designed your system well, you start to see a payoff. If you have a good separation between data, logic, and display, and your system is logically laid out, then near the end of the cycle as you approach launch date you will start to notice a trend. The customer will make a request and you will find that you can implement the change more and more quickly. If you need to make a business logic change you only have to go one place to make it, and you don't have to double check a thousand other places to make sure the change didn't break something. Page layout? Same thing. Database layer changes? Ditto. You start to experience 5 and 10 minute turnarounds on minor changes and 1-2 hour turnarounds on larger ones. Enjoy this period while you can. It's a very good place to be. 
 diff --git a/_content/Whats-your-Development-process?-2006-1-20.yml b/_content/Whats-your-Development-process?-2006-1-20.yml new file mode 100644 index 0000000..136b8cc --- /dev/null +++ b/_content/Whats-your-Development-process?-2006-1-20.yml @@ -0,0 +1,6 @@ +title: What's your Development process? +time: 2006-01-20 16:08:01 +section: Software-Development +content-type: html +content: | +I've tried all those development process programs and tools. UML, Case Tools, Flowcharting. You name it, I've investigated it. Just part of my nature I guess. I like new toys and investigating or learning about new things. When it comes to serious development work though, I've really only found one process that works for me. Just Code It. I start creating the structure of my app. Code the Class and Object definitions. Identify the needed functions. While doing this I write pseudocode in the form of comments in these structures. Then I start coding. It's a highly iterative process but it works for me and makes me far more productive than anything else I've found. I also tend to write Black Box code. Each of the pieces has a clear way of being used and each can be coded without knowing the internals of the other pieces. This way I can test and code each piece easily without having to worry about the rest of the app. I'm not sure what you'd call this process. It bears some similarity to Agile or Extreme programming processes I suppose, but really it's just an extension of the way I think. So... What's your development process? 
 diff --git a/_content/WikiWyg-2005-8-26.yml b/_content/WikiWyg-2005-8-26.yml new file mode 100644 index 0000000..7ead28f --- /dev/null +++ b/_content/WikiWyg-2005-8-26.yml @@ -0,0 +1,12 @@ +title: WikiWyg +time: 2005-08-26 01:03:28 +tags: + - Site-News + - APIs + - CSS + - javascript + - Software-Development + - X-HTML +content-type: html +content: | +WikiWyg. With the higher profile dynamic JavaScript pages are getting, look to see a lot more folks working on this stuff. And this is a wonderful example of what you can do. diff --git a/_content/Wordpress-Templating-sucks-2006-5-3.yml b/_content/Wordpress-Templating-sucks-2006-5-3.yml new file mode 100644 index 0000000..9ede3bb --- /dev/null +++ b/_content/Wordpress-Templating-sucks-2006-5-3.yml @@ -0,0 +1,6 @@ +title: Wordpress Templating sucks... +time: 2006-05-03 10:40:31 +section: Site-News +content-type: html +content: | +Now don't get me wrong. I really like the Wordpress UI for posting and managing pages and posts and comments and spam. However, they are sadly, sadly lacking in the templating department. I have discovered this in the last few days after attempting to modify a template. I really dislike the way it all works. You aren't templating, you're coding in PHP, and it's a little obtuse that way. It's probably just a matter of preference. Perhaps I should write a templating plugin. If that's possible. I wonder.....? diff --git a/_content/Would-you-like-a-little-Moose-wi-2007-3-20.yml b/_content/Would-you-like-a-little-Moose-wi-2007-3-20.yml new file mode 100644 index 0000000..5653194 --- /dev/null +++ b/_content/Would-you-like-a-little-Moose-wi-2007-3-20.yml @@ -0,0 +1,18 @@ +title: Would you like a little Moose with that? +time: 2007-03-20 15:57:54 +tags: + - Site-News + - APIs + - Open-Source + - Perl + - Software-Development +content-type: html +content: | +I have found a great new tool in CPAN. Moose is an extension of the Perl OO system. 
It implements metaclasses in an intuitive and easy to understand way. Getting started with Moose couldn't be easier. Make sure you use v0.18 though, because the previous version has a few quirks that get in the way. Basically, all you need to get started is to install Moose from CPAN. I recommend doing so from CPAN because a lot of the distros are a little behind, and 0.18 has some fixes that will make your life a great deal easier when using Moose. Once you've installed Moose it's time to start building your classes. In this tutorial I'm going to highlight the most used (in my opinion) features of Moose.
  • has
  • after
  • subtype
A class is composed of attributes and operations for the most part. Some OO systems further divide these into private and public methods/attributes. For this tutorial though we will just keep it at those two. The first thing you have to do in your class is use Moose at the top to import the Moose package. package MyClass; +use Moose; Now you're committed. That use Moose statement has forever altered the way you write this class. Well, not really, you can turn it off, but let's not worry about that right now. Moose does a lot of heavy lifting for you in the background so it's best to just use the Moose feature set to build your class from here on out. The first thing we need to do is start writing our class's attributes. For this we will need our handy dandy 'has' function. 'has' will create our attributes for us and automatically do type restriction, create accessor methods, and police reading and writing to them. So what does 'has' need? It needs two arguments: a scalar and a list of key/value pairs, with a minimum of at least one key, though I recommend two at a minimum. The scalar is the name of the attribute. The list describes the attribute for Moose so it knows how to set it up for you. The first key, and the absolute must, is the 'isa' key. This key tells Moose what type to restrict the attribute to. It can be a class name or a predefined subtype. You can see a list of Moose's already defined subtypes here: http://search.cpan.org/~stevan/Moose/lib/Moose/Util/TypeConstraints.pm#Default_Type_Constraints You should be able to use those to get started. The second key Moose needs is the 'is' key. This key tells Moose how to police reading and writing to this attribute. A value of 'rw' in this key tells Moose this attribute can be read and written. A value of 'ro' in the key tells Moose this attribute can only be read and not written to. So let's add a couple of attributes to this class. 
has 'Name' => (is => 'rw', isa => 'Str'); +has 'Purpose' => (is => 'ro', isa => 'Str'); + Now our class instances can have a name and a purpose. We can see that both attributes are strings and that the name is readable and writable while the purpose is only readable. This is actually a fully functional class now. It only works to store things since we don't have any operations yet, but it is fully usable. One thing to keep in mind is that Moose automatically adds Moose::Object as your class's base class. So you do have a few operations: ->new() is a constructor that Moose::Object provides for you. This allows Moose to properly set up your class's accessors and constraints for you. We could use this class now by calling my $obj = MyClass->new(); We can set our name by calling $obj->Name('test'); We can retrieve that name by calling my $name = $obj->Name(); But wait, our Purpose attribute is not writable and nothing is stored in it. How on earth do we get something in there? Ahhh, now we come to Moose's little secret: You don't have to use the methods to access those attributes. $obj->{Purpose} = 'To Show off Moose'; will work just as well, with a few caveats. (You see, it really is just an extension of the Perl OO system.) Moose doesn't do any policing when you access them this way. This means people who use your class and regard it properly as a black box shouldn't be doing this. Which brings us to two more really handy keys in the descriptive list we are passing to has: 'default' and 'required'. Up to now our attributes have not been restricted from being undefined. Setting the required => 1 key/value pair in our list takes care of that. If we do that, though, then we have to define a default value for the attribute or our class will error on compile with Moose telling us that the attribute is undefined. That's what the default key is for. default => 'To Show off Moose' will set the default value for the attribute to our string. 
Now our class looks like this:

package MyClass;
use Moose;
has 'Name' => (is => 'rw', isa => 'Str');
has 'Purpose' => (is => 'ro', isa => 'Str', default => 'To Show off Moose', required => 1);

When we create a new instance of our class with my $obj = MyClass->new(); our Purpose attribute will be preset for us and not modifiable (without breaking the rules, which you would never do, of course). That's enough for this post. I'll be posting next on the benefits of after and subtype when it comes to your attributes and object integrity. diff --git a/_content/XML-Menus-and-PHP-2005-9-12.yml b/_content/XML-Menus-and-PHP-2005-9-12.yml new file mode 100644 index 0000000..3792760 --- /dev/null +++ b/_content/XML-Menus-and-PHP-2005-9-12.yml @@ -0,0 +1,6 @@ +title: XML, Menus, and PHP +time: 2005-09-12 20:50:08 +section: Uncategorized +content-type: html +content: | +XML, Menus, and PHP The menu you see on the left is dynamically driven from an XML file using PHP. (that menu is no longer there but go with it anyway -ED) Why, you ask? What would possess me to do such a thing? Very simple: I didn't want to pay for a DB server on the host, but I also wanted the menu to be dynamically driven. The answer? A text file. But how to easily get information out of the text file and onto the page? Again an answer presents itself....
  1. XML: A way to store and parse data in text files

    I was going to use XML to store my data. It offered the following benefits: it was a text file, it had a number of ready-to-use parsers in all the common server scripting platforms, and I could use the file anywhere. The first step, of course, was to decide on the XML elements to use and how they would be used in the document. I had to write an XML spec of sorts so I knew how to interpret the file.

    1. First, I needed a Root element. XML documents require a document root element in order to be valid. We'll call that element the "map" element. After all, this document is going to amount to a sitemap of sorts. Which brings us to a side benefit of using XML. I can use the same file to generate a dynamic sitemap should I wish to and so can anyone else. I can provide this document publicly and anyone can host a way to get anywhere on my site from theirs. Who knows? It may be useful some day. Right now, our document looks like this:

      <?xml version='1.0' ?>
      <map>
      </map>
    2. Next, we need to have an element that holds all the data about one link. We'll call that the "section" element since it describes a section of the site. We also need elements inside this element to hold all the pieces we need to know to build our menu. In this case we are storing the link, the description, and the name of each link for the menu. Those elements are:

      1. link
      2. description
      3. name
      So now our document looks like this:
      <?xml version='1.0' ?>
      <map>
        <section>
          <link></link>
          <description></description>
          <name></name>
        </section>
      </map>
    3. Each section element can be repeated as many times as necessary. The section element can only hold one link, description, and name element. That concludes our specs for the XML document.
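    Putting the spec together, a complete sitemap.xml might look like this. The URLs and text here are hypothetical placeholders, not my actual menu entries:

```xml
<?xml version='1.0' ?>
<map>
  <section>
    <link>/index.php</link>
    <description>The front page of the site</description>
    <name>Home</name>
  </section>
  <section>
    <link>/archives/</link>
    <description>Older posts</description>
    <name>Archives</name>
  </section>
</map>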

  2. The Parser.

    Now I had to select a parser to use. My platform for development at the time happened to be PHP, so what did PHP offer in the way of XML parsers? PHP actually had two parsers available: one was a SAX parser and the other was a DOM parser. At the time of this project, however, only the SAX parser was included in the default distribution of PHP. So SAX it was.

    SAX parsers are event-driven parsers. They are simpler to learn, but harder to work with, than DOM parsers. An event-driven parser works by firing events as it streams through the text. The events that fire are:

    1. element start
    2. element end
    3. cdata

    So, you write a handler for each event and the parser calls them as the events fire. First, we need to create a parser object using $parser = xml_parser_create(). Then, we need to create the handler functions and register them: xml_set_element_handler($parser, 'startElement', 'endElement') for the element events, and xml_set_character_data_handler($parser, 'cData') for character data. We also need to set our parser options using xml_parser_set_option($parser, XML_OPTION_CASE_FOLDING, 0) so element names keep their case. Here is our PHP code so far:

     // Begin code to create menu from xml file.
     $varFileXmlFile = "xml_databases/sitemap.xml";
     $xmlFile = fopen($varFileXmlFile, "r");
     $xmlString = fread($xmlFile, filesize($varFileXmlFile));

     $strMenu = "";
     $currentElement = "";
     $name = "1";
     $link = "2";
     $title = "3";

     // function to handle the beginning of an element
     function startElement ($parserHandle, $elementName, $attributes) {
         // declare the global variables here
     }

     // function to handle the end of an element
     function endElement ($parserHandle, $elementName) {
         // declare global variables here
     }

     // function to handle the data in an element.
     function cData ($parserHandle, $cdata) {
         // declare global variables here
     }

     $parser = xml_parser_create();
     xml_parser_set_option($parser, XML_OPTION_CASE_FOLDING, 0);
     xml_set_element_handler($parser, 'startElement', 'endElement');
     xml_set_character_data_handler($parser, 'cData');

    Now we need to decide what happens when a start element is reached in our document. Since our goal is to retrieve the info from certain elements when they arrive, all we really need to know from this event is what element we are on. When the parser runs the startElement function it passes in the parser handle, element name and attribute list. All we have to do is add code to the function that stores the element's name in a global variable for other functions to use.

     function startElement ($parserHandle, $elementName, $attributes) {
         // declare the global variables here
         global $currentElement;
         // identify current element here
         $currentElement = $elementName;
     }

    We declare $currentElement with the global keyword so the function will use the variable declared earlier in the script instead of creating a function-specific variable. We want other functions to be able to access the current element so they know what element they are working on.
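    PHP's scoping is the tricky part here: a function does not see script-level variables unless you ask for them. A minimal standalone sketch of the difference (not part of the menu code):

```php
<?php
$currentElement = "";

function remember($name) {
    global $currentElement;    // bind to the script-level variable
    $currentElement = $name;
}

function forget($name) {
    $currentElement = $name;   // no 'global': this is a local, thrown away on return
}

remember("section");
echo $currentElement, "\n";    // prints "section"

forget("link");
echo $currentElement, "\n";    // still prints "section"
```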

    We also need to decide what happens when an end element is reached in our document. We only really care about the end of section elements, since that is when we store the values gathered from section's child elements. When the parser fires the endElement function it passes in the parser handle and the element name. So all we have to do is add a test to see if it's a section end element, and then output the values we will store using the CDATA handler. Additionally, we need to clear the $currentElement variable since we are no longer in that element.

     // function to handle the end of an element
     function endElement ($parserHandle, $elementName) {
         // declare global variables here
         global $currentElement;
         global $strMenu;
         global $name;
         global $title;
         global $link;
         $currentElement = "";
         if ($elementName == "section") {
             $strMenu .= "<li><a href=\"" . $link . "\" title=\"" . $title . "\">";
             $strMenu .= $name . "</a></li>\n";
         }
     }

    Again we declare our variables with the global keyword so we can read the data that the CDATA handler stores in the global variables. In our case we want to output list item elements for inclusion in an unordered list later.

    The last event we have to handle is when CDATA is reached. CDATA is text data that is not an XML element; in other words, it's the data the elements are holding for us. When the cData function is called by the parser, it passes in the parser handle and the value it retrieved. For our purposes, we need to do something different with the data depending on which element we are inside. If we are in a link element we store the value in our link variable, and so on for all the other section sub-elements, like this:

     // function to handle the data in an element.
     function cData ($parserHandle, $cdata) {
         // declare global variables here
         global $currentElement;
         switch ($currentElement) {
             case "link":
                 global $link;
                 $link = $cdata;
                 break;
             case "description":
                 global $title;
                 $title = $cdata;
                 break;
             case "name":
                 global $name;
                 $name = $cdata;
                 break;
             default:
                 break;
         }
     }

    Now that we have handled all the events, we are ready to retrieve our XML file. So, we run the parser with the stored string from our XML file.

     $varFileXmlFile = "xml_databases/sitemap.xml";
     $xmlFile = fopen($varFileXmlFile, "r");
     $xmlString = fread($xmlFile, filesize($varFileXmlFile));
     xml_parse($parser, $xmlString, true);
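     The call above assumes the parse succeeds. xml_parse() returns false on malformed input, so in practice you may want a defensive variant like this (my addition, not part of the original article):

```php
if (!xml_parse($parser, $xmlString, true)) {
    die(sprintf("XML error: %s at line %d",
        xml_error_string(xml_get_error_code($parser)),
        xml_get_current_line_number($parser)));
}
xml_parser_free($parser); // release the parser when we are done
```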
  3. Now how do we use it, you ask? Simple: just include the source code in our page, output the contents of the $strMenu variable into an unordered list, and style to suit. The complete source code for this project is shown below. Feel free to copy and use it if you wish, but make sure you give me some credit first.
    // Begin code to create menu from xml file.
    $varFileXmlFile = "xml_databases/sitemap.xml";
    $xmlFile = fopen($varFileXmlFile, "r");
    $xmlString = fread($xmlFile, filesize($varFileXmlFile));

    $strMenu = "";
    $currentElement = "";
    $name = "1";
    $link = "2";
    $title = "3";

    // function to handle the beginning of an element
    function startElement ($parserHandle, $elementName, $attributes) {
        // identify current element here
        global $currentElement;
        $currentElement = $elementName;
    }

    // function to handle the end of an element
    function endElement ($parserHandle, $elementName) {
        global $currentElement;
        global $strMenu;
        global $name;
        global $title;
        global $link;
        $currentElement = "";
        if ($elementName == "section") {
            $strMenu .= "<li><a href=\"" . $link . "\" title=\"" . $title . "\">";
            $strMenu .= $name . "</a></li>\n";
        }
    }

    // function to handle the data in an element.
    function cData ($parserHandle, $cdata) {
        global $currentElement;
        switch ($currentElement) {
            case "link":
                global $link;
                $link = $cdata;
                break;
            case "description":
                global $title;
                $title = $cdata;
                break;
            case "name":
                global $name;
                $name = $cdata;
                break;
            default:
                break;
        }
    }

    $parser = xml_parser_create();
    xml_parser_set_option($parser, XML_OPTION_CASE_FOLDING, 0);
    xml_set_element_handler($parser, 'startElement', 'endElement');
    xml_set_character_data_handler($parser, 'cData');
    xml_parse($parser, $xmlString, true);
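    Rendering the menu then amounts to the include-and-echo step described above; a sketch of the page side, with hypothetical filenames:

```php
<!-- in the page that shows the menu -->
<?php include 'xml_menu.php'; ?>
<ul id="sitemenu">
<?php echo $strMenu; ?>
</ul>
```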
Additional Reading: diff --git a/_content/Yet-another-who-doesnt-get-it-2005-9-18.yml b/_content/Yet-another-who-doesnt-get-it-2005-9-18.yml new file mode 100644 index 0000000..8584069 --- /dev/null +++ b/_content/Yet-another-who-doesnt-get-it-2005-9-18.yml @@ -0,0 +1,8 @@ +title: Yet another who doesn't get it... +time: 2005-09-18 06:30:50 +tags: + - Site-News + - Open-Source +content-type: html +content: | +Stephen J Marshall CEng MBCS CITP betrays his ignorance in an article I found through Slashdot. Now I'm not a OSS fanatic for the most part. I use it and can't justify paying for software when I don't have to. But I don't go around yipping about how Closed Source is BAD. Folks like Mr. Stephen Marshall however really get my goat. Either they don't understand or they don't want to adapt. Let's take a look at his points one at a time.
  1. Intellectual Property:
    Mr. Marshall brings up a favourite recent trend among OSS detractors: IP, or the perceived lack of it. He makes a point about how British patent law may conflict with most paid OSS volunteers; namely, that all the work a British programmer does is owned by his employer, even work done after hours on his own time. All well and good; sounds like the British have an IP law to fix. But how does this affect the rest of the world? Furthermore, most programmers who contribute to OSS as part of their jobs are specifically paid to do so. In those cases the company is expressly releasing its patent rights to the work by giving its programmers time for the project. In fact, this is the primary form of quality OSS development. Companies like IBM, Hewlett-Packard, even Dell pay programmers to work on OSS. They see it as a viable way to compete with, or escape the stranglehold of, a software monopoly. Welcome to the free market economy. No monopoly can last: someone somewhere will find a way to compete, even if it means giving away their software for free.
  2. Conceptual Integrity:
    I am so tired of this argument. Mr. Marshall has obviously never tried to contribute to a thriving, quality OSS application. Believe me, it's not a free-for-all. They don't give away CVS commit access to just anyone; you have to prove your worth first. And there most definitely is a gatekeeper. In the case of the Linux kernel it's Linus Torvalds, and almost every other major OSS endeavor out there has one too. The gatekeeper decides what patches to take and what to leave, and which programmers' ideas he wants to include. Yes, you might fork the code to pursue your idea, but it won't make it into the "official" version; it will be a different piece of software. If you fork Apache, it's not Apache anymore, and people will know it. Conceptual integrity is not missing in OSS, nor does the OSS process make it impossible to achieve.
  3. Professionalism:
    Again he shows either a complete lack of knowledge concerning OSS, or this is blatant misinformation. Let's look at an example: the Eclipse IDE. IBM sponsored it. A foundation oversees it. And it is quite possibly the most useful, powerful, and polished IDE out there. It also happens to be open source. How did an open source app get like this? Simple: IBM got it there. OSS is just another development and licensing process for a company to use. It is not a hippie free-love fest with anti-corporation sentiment as a required component. Somehow I think the folks IBM pays to work on Eclipse are professional about it.
  4. Innovation:
    I don't even know where to start on this one. The X Window System is the only windowing system I know of that is network transparent from the ground up, and it has been open source since the beginning. Firefox? Again, arguably one of the most innovative browsers out there. What do all these apps have in common? They have commercial backing. This guy writes as if OSS has no commercial backing and never will. OSS is here to stay. The free market demanded it. Microsoft's monopoly incubated it. And now, like all segments of a free market eventually do, software development is evolving. This is not a problem; it's just the natural progression of a free market economy at work.
So what is my point? OSS is here to stay. It's time to stop worrying about it and start thinking about how you can use it, make money with it, and succeed in the evolving marketplace of software development. IBM has figured it out, Novell has figured it out, and eventually Mr. Stephen Marshall will figure it out too, or become marginalized for his failure to adapt to the new marketplace. diff --git a/_content/been-busy-2006-1-15.yml b/_content/been-busy-2006-1-15.yml new file mode 100644 index 0000000..aea8702 --- /dev/null +++ b/_content/been-busy-2006-1-15.yml @@ -0,0 +1,6 @@ +title: been busy.... +time: 2006-01-15 23:25:03 +section: Site-News +content-type: html +content: | +But I do have a few posts brewing in my head and taking shape so watch this space. diff --git a/_content/custom-dtd-modular---Google-Sear-2005-9-12.yml b/_content/custom-dtd-modular---Google-Sear-2005-9-12.yml new file mode 100644 index 0000000..23dc39d --- /dev/null +++ b/_content/custom-dtd-modular---Google-Sear-2005-9-12.yml @@ -0,0 +1,10 @@ +title: custom dtd modular - Google Search +time: 2005-09-12 21:12:24 +tags: + - Site News + - Software Development + - User Interface + - X-HTML +content-type: html +content: | +custom dtd modular - Google Search Apparently I'm ranked just below a-list-apart on the subject of Modular XHTML and custom DTDs. How I got there I don't know. Go check out the article if you want. It's kind of interesting in an esoteric way :-) diff --git a/_content/dependency-injection-with-guice-sucks.yml b/_content/dependency-injection-with-guice-sucks.yml new file mode 100644 index 0000000..6df2206 --- /dev/null +++ b/_content/dependency-injection-with-guice-sucks.yml @@ -0,0 +1,14 @@ +title: Dependency Injection with Guice Sucks +time: 2010-07-14 +timeformat: 2006-01-02 +content: | +I've learned to hate Guice lately. + +* It decouples the wrong things. +* It hides details you care about. +* It doesn't do anything a Factory class wouldn't do better.
+* It locks you in (Nobody leaves Guice... No Body) + +Therefore +* It's harder to reason about. + diff --git a/_content/dtach-2006-6-22.yml b/_content/dtach-2006-6-22.yml new file mode 100644 index 0000000..eabe26a --- /dev/null +++ b/_content/dtach-2006-6-22.yml @@ -0,0 +1,6 @@ +title: dtach +time: 2006-06-22 15:14:19 +section: Site-News +content-type: html +content: | +the stripped down win32 GNU Screen binary for Cygwin diff --git a/_content/erlang-datetime-utility-function-2009-5-1.yml b/_content/erlang-datetime-utility-function-2009-5-1.yml new file mode 100644 index 0000000..30c7e4e --- /dev/null +++ b/_content/erlang-datetime-utility-function-2009-5-1.yml @@ -0,0 +1,13 @@ +title: erlang datetime utility functions +time: 2009-05-01 01:17:21 +tags: + - Site-News + - datetime + - erlang + - utility-functions +content: | +I have been doing some work on timeseries data in erlang for iterate graphs and +found the calendar module in stdlib to be lacking in some useful features. +So being the helpful hacker I am I created a utility module to wrap calendar +and be a little more user-friendly. May I present date_util.erl: + + + diff --git a/_tmpl/templates.clj b/_tmpl/templates.clj new file mode 100644 index 0000000..9e190e8 --- /dev/null +++ b/_tmpl/templates.clj @@ -0,0 +1,168 @@ +(use 'com.marzhillstudios.molehill.template) +(use 'com.marzhillstudios.molehill.config) +(use 'net.cgrand.enlive-html) +(use '[com.marzhillstudios.molehill.file :only [is-hill-file? 
hill-file-slug]]) +(use '[clojure.contrib.duck-streams :only [file-str]]) + +(def my-page-resource (file-str "_tmpl/page.html")) +(def my-feed-resource (file-str "_tmpl/feed.rss")) + +;TODO(jwall): strip the site-root from the paths on these resources +(def my-css-resources (cons (file-str (:site-root *site-config*) "/static/styles/shThemeDefault.css") + (cons (file-str (:site-root *site-config*) "/static/styles/shCore.css") + (filter #(= (take-last 3 (.toString %1)) '(\c \s \s)) + (file-seq (file-str (:site-root *site-config*) "/static/css")))))) +(def my-js-brushes + (filter #(= (take-last 2 (.toString %1)) '(\j \s)) + (file-seq (file-str (:site-root *site-config*) "/static/brushes")))) +(def my-js-syntax-core-resources (vector (file-str (:site-root *site-config*) + "/static/scripts/shCore.js"))) +(def my-js-resources (concat my-js-syntax-core-resources my-js-brushes)) + +(defn create-tag-link + [tag] + (cond + (isa? (type tag) clojure.lang.IPersistentVector) + (format "%s" + (str "tagsize" (nth tag 2)) + (str "/" (nth tag 1) "/") + (nth tag 1)) + :else + (format "%s" + (str "tagsize" 1) + (str "/" tag "/") + tag))) + +(defn create-tag-links + [tags] + (for [tag tags] + (create-tag-link tag))) + +(defn- do-tags + [tags] + (html-snippet (apply str (mapcat #(str % " ") (create-tag-links tags))))) + +(defn do-tags-append + [tags] + (append (do-tags tags))) + +(defn- do-tags-content + [tags] + (content (do-tags tags))) + +(defn create-post-link + [post] + (format "%s" + (str "/" "/") + (nth post 1)) + post) + +(defn mk-link + [txt href] + (html-snippet (format "%s" href txt))) + +(defn do-posts + [posts] + (clone-for [post (take 5 posts)] + [:h1] (content (mk-link (:title post) + (str "/entries/" (hill-file-slug post) "/"))) + [:div.datetime :span.post-time] (content (:date (:date post))) + [:div.post-body] (html-content ((:parsed-content post))) + [:div.tags] (do-tags-append (:tags post)))) + +(defn do-feed-items + [posts] + (clone-for [post posts] + [:title] 
(content (:title post)) + [:link] (content (str "/entries/" (hill-file-slug post) "/")) + [:category] (do-tags-append (:tags post)) + [:description] (html-content ((:parsed-content post))) + [:pubDate] (html-content (:date (:date post))))) + +(defn strip-site-root + [config res] + (apply str (drop (count (:site-root config)) res))) + +(defn do-links + [resources res-type res-media res-rel] + (clone-for [res resources] + (set-attr :href (strip-site-root *site-config* (.toString res)) + :type res-type :media res-media :rel res-rel))) + +(defn do-scripts + [resources res-type] + (clone-for [res resources] + (set-attr :src (strip-site-root *site-config* (.toString res)) :type res-type))) + +(deftemplate my-entry-page my-page-resource [state] + [:head :title] (content (str (:site-name state) "-" + (:title (first (:entries state))))) + [:head :script] (do-scripts my-js-resources "text/javascript") + [:head [:link (attr= :type "text/css")]] + (do-links my-css-resources "text/css" "screen" "stylesheet") + [:div#topbar :h1#title] (content (mk-link (str (:site-name state)) "/")) + [:div#content :div.post] (do-posts (:entries state)) + ; TODO(jwall): add link to molehill site. + [:div#powered-by] (content "Powered By molehill")) + +(deftemplate my-index-page my-page-resource [state] + [:head :title] (content (str (:site-name state))) + [:head :script] (do-scripts my-js-resources "text/javascript") + [:head [:link (attr= :type "text/css")]] + (do-links my-css-resources "text/css" "screen" "stylesheet") + [:div#topbar :h1#title] (content (mk-link (str (:site-name state)) + "/")) + [:div#content :div.post] (do-posts (:entries state)) + ; TODO(jwall): add link to molehill site. + [:div#powered-by] (content "Powered By molehill")) + +(deftemplate my-feed-page my-feed-resource [state] + [:channel :title] (content (str (:site-name state))) + ; TODO(jwall): should this be in the config? 
+ [:channel :description] (content (str "")) + [:channel :copyright] (content (str "")) + [:channel :language] (content (str "en-us")) + [:channel :lastBuildDate] (content (str "")) + [:channel :webMaster] (content (str "")) + [:channel :generator] (content (str "molehill-0.0.1")) + [:item] (do-feed-items (:entries state)) + ; TODO(jwall): add link to molehill site. + [:div#powered-by] (content "Powered By molehill")) + +(deftemplate my-tag-landing-page my-page-resource [state] + [:head :title] (content (str (:site-name state))) + [:head :script] (do-scripts my-js-resources "text/javascript") + [:head [:link (attr= :type "text/css")]] + (do-links my-css-resources "text/css" "screen" "stylesheet") + [:div#topbar :h1#title] (content (mk-link (str (:site-name state)) + "/")) + [:div#content :div.post] (do-posts (:entries state)) + ; TODO(jwall): add link to molehill site. + [:div#powered-by] (content "Powered By molehill")) + +(deftemplate my-tag-page my-page-resource [state] + [:head :title] (content (str (:site-name state))) + [:head :script] (do-scripts my-js-resources "text/javascript") + [:head [:link (attr= :type "text/css")]] + (do-links my-css-resources "text/css" "screen" "stylesheet") + [:div#topbar :h1#title] (content (mk-link (str (:site-name state)) + "/")) + [:div#content :div.post] (do-tags-content (:entries state)) + ; TODO(jwall): add link to molehill site. 
+ [:div#powered-by] (content "Powered By molehill")) + +(hill-page :index-tmpl index-state + (apply str (my-index-page index-state))) + +(hill-page :feed-tmpl index-state + (apply str (my-feed-page index-state))) + +(hill-page :tag-tmpl index-state + (apply str (my-tag-page index-state))) + +(hill-page :tag-landing-tmpl index-state + (apply str (my-tag-landing-page index-state))) + +(hill-page :entry-tmpl index-state + (apply str (my-entry-page index-state))) + diff --git a/confs/apache.conf b/confs/apache.conf new file mode 100644 index 0000000..937e34b --- /dev/null +++ b/confs/apache.conf @@ -0,0 +1,39 @@ + + ServerAdmin jeremy@marzhillstudios.com + + DocumentRoot "/home/jwall/www/" + ServerName jeremy.marzhillstudios.com + + + DirectoryIndex index.html + Options Indexes FollowSymLinks MultiViews + AllowOverride Options FileInfo AuthConfig Limit + Order allow,deny + Allow from all + + + RewriteEngine on + #RewriteOptions MaxRedirects=2 + + # first resume redirects to personal resume.pdf + RewriteCond %{REQUEST_URI} ^/resume/?$ + RewriteRule ^/resume/?$ /personal/resume.pdf + + # cv gets the same treatement + RewriteCond %{REQUEST_URI} ^/cv/?$ + RewriteRule ^/cv/?$ /personal/resume.pdf + + # now set up the rewrite rules to simulate wordpress links + # first we handle the index page case + RewriteCond %{REQUEST_URI} ^/index.php/?$ + RewriteRule ^/index.php/?$ / [R=302,L] + + # then we handle the tag/category page case + RewriteCond %{REQUEST_URI} ^/index.php/(tag|category)/.* + RewriteRule ^/index.php/(tag|category)/(.*)/?$ /$1/ [R=302,L] + + # then we handle the entry page case + RewriteCond %{REQUEST_URI} ^/index.php/[^/]+/[^/]+/$ + RewriteRule ^/index.php/[^/]+/([^/]+)/?$ /entries/$1/ [R=302,L] + + diff --git a/site.yml b/site.yml new file mode 100644 index 0000000..9afc81c --- /dev/null +++ b/site.yml @@ -0,0 +1,12 @@ +sitename: Marzhill Musings +host: jeremy.marzhillstudios.com +author: Jeremy Wall +dirs: + static: _static + content: _content + output: 
generated + template: _tmpl +tmpl: + article: page.html + collection: page.html + tagcloud: page.html diff --git a/submit.sh b/submit.sh new file mode 100755 index 0000000..eb62782 --- /dev/null +++ b/submit.sh @@ -0,0 +1,24 @@ +#!/bin/bash +#=============================================================================== +# +# FILE: submit.sh +# +# USAGE: ./submit.sh +# +# DESCRIPTION: Sync the webpage with the remote server to publish to +# +# OPTIONS: --- +# REQUIREMENTS: --- +# BUGS: --- +# NOTES: --- +# AUTHOR: Jeremy Wall (jw), jeremy@marzhillstudios.com +# COMPANY: Marzhillstudios.com +# VERSION: 1.0 +# CREATED: 07/09/2010 21:51:59 CDT +# REVISION: --- +#=============================================================================== + +wd=$(dirname $0) + +rsync --rsh=ssh --delete --checksum --recursive $* \ + $wd/generated/* zaphar.xen.prgmr.com:/home/jwall/www