tag:blogger.com,1999:blog-29712577113629342242024-02-08T10:37:48.546-08:00Command Line MacThe Power of the Prompttekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comBlogger86125tag:blogger.com,1999:blog-2971257711362934224.post-75347324609125887582012-02-25T08:42:00.001-08:002012-02-25T08:42:55.428-08:00Launching Linux FireballSince late 2011, I've found myself doing a lot more Linux administration than Mac. In light of that, I am launching a <a href="http://linuxfireball.blogspot.com/">new Linux blog</a> that will have a lot of overlap with Command Line Mac.
<br><br>
Mostly for my own convenience, I'll be copying a lot of the Unix/Linux-only posts over to the new blog. I'll still be posting here when I run across something interesting in the Mac world, but I expect the other blog to be much more active this year.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-71968600106217785512012-02-25T08:37:00.001-08:002012-02-25T08:37:34.665-08:00Adventures with Airport ExpressAfter the failure of my cable TV/Internet service last month, and the slow response of the cable company, I dumped cable in favor of satellite+DSL. It was an inconvenient process to say the least. When it was done, I needed to connect the satellite DVR to the wireless DSL router to enable additional features.
<br><br>
The DVR had a USB wireless adapter, but because of the location of the router, the signal was not quite strong enough to make the connection work. I had a spare Airport Express and my first thought was to use it to extend the DSL router network. Before doing any research, I made manual changes to the Airport Express that rendered it a brick. Doh!
<br><br>
I found a <a href="http://support.apple.com/kb/HT3728">helpful article at Apple Support that let me reset it</a> to factory settings. Then, I finally did the research and found that the plain Express can't be used to extend a 2WIRE wireless router.
<br><br>
The next strategy was to set up a separate wireless network and plug the Airport Express into one of the wired ports on the router. The Airport Express provided a stronger signal than the 2WIRE and allowed me to connect the DVR to the Internet. The only modification I had to make from a basic configuration was to turn off network address translation (NAT) on the Airport Express to avoid a double NAT situation since the 2WIRE also provided NAT. The Airport Express actually detected the problem and made the suggestion when I connected to it with the Airport Utility. Apple really has some very nicely thought out software in their hardware.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-28054724300425790682011-11-17T15:01:00.001-08:002011-11-17T15:12:44.150-08:00Ruby: Sorting an array with multiple fieldsI was working on a project today and needed to make some changes to the way an array of objects was sorted before being displayed on the screen. The main reason I even bothered to post this is to heap a little more praise on Ruby for working the way I expected it to work.
<br><br>
Here was my object (not tied to a database, just created for convenience):
<blockquote>class Force<br>
attr_accessor :id<br>
attr_accessor :last_name<br>
attr_accessor :first_name<br>
attr_accessor :rank<br>
attr_accessor :shift<br>
attr_accessor :promoted_on<br>
attr_accessor :phone<br>
attr_accessor :totalcount<br>
end</blockquote>
I needed to sort an array of these objects by rank ascending, shift ascending, then promoted_on (date) descending.
<br><br>
I sorted them by concatenating the fields together and making the right comparisons. This only works because the first two fields, rank and shift, cannot be empty. Otherwise, a simple concatenation would create undesired results.
<br><br>
Sorting all fields ascending looks like this:<br>
<blockquote>@forces = @forces.sort {|x,y| x.shift + x.rank + x.promoted_on.to_s <=> y.shift + y.rank + y.promoted_on.to_s}
</blockquote>
To sort the promoted_on field in descending order, reverse the x,y sides for that field:<br>
<blockquote>@forces = @forces.sort {|x,y| x.shift + x.rank + <b>y.promoted_on.to_s</b> <=> y.shift + y.rank + <b>x.promoted_on.to_s</b>}</blockquote>
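To make the trick concrete, here is a self-contained, runnable sketch. The simplified Force struct and the sample records below are hypothetical stand-ins for the post's class, not data from the original project:

```ruby
require 'date'

# Simplified stand-in for the post's Force class (hypothetical sample data).
Force = Struct.new(:shift, :rank, :promoted_on)

forces = [
  Force.new("A", "1", Date.new(2010, 1, 1)),
  Force.new("A", "1", Date.new(2011, 6, 1)),
  Force.new("B", "1", Date.new(2009, 3, 1)),
]

# Same technique as above: concatenate the sort keys, swapping the x/y
# sides for the field that should sort descending. Date#to_s yields
# ISO 8601 (YYYY-MM-DD), which compares correctly as a string.
sorted = forces.sort do |x, y|
  (x.shift + x.rank + y.promoted_on.to_s) <=> (y.shift + y.rank + x.promoted_on.to_s)
end

sorted.each { |f| puts [f.shift, f.rank, f.promoted_on].join(" ") }
# prints:
# A 1 2011-06-01
# A 1 2010-01-01
# B 1 2009-03-01
```

If any of the fields could be empty, sort_by with an array of keys (e.g. sort_by { |f| [f.shift, f.rank] }) avoids the concatenation pitfall, though mixing ascending and descending keys then takes a little more care.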
No need for a special sort coding block. Ruby FTW!tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-46536643524884562342011-10-06T09:56:00.001-07:002011-10-06T09:56:44.527-07:00Steve Jobs RIPSteve Jobs RIPtekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-63758701625402939452011-09-18T12:55:00.000-07:002011-09-18T18:21:18.498-07:00The Chromium OS experiment<b>Google's Chromium OS</b>
<br><br>
I downloaded a bootable ISO of the Google Chromium OS release candidate, then fired it up in VMware Player and took a look around. It appears to be based on OpenSUSE Linux 11.4. SUSE was one of the distributions I cut my teeth on in the Linux world. I always appreciated the 9-pound manual of documentation that came with it. Very helpful for beginners and a good reference to their tools.
<br><br>
I might have made a mistake when setting up the virtual machine because it complained of low memory right after booting. This could also explain the slow performance I experienced. The desktop itself is clean and appears to be optimized to run the Chrome web browser. There was a local word processing application, but I didn't get far enough to test it out; everything was just too slow. I could not find a download link at Google, but I found one at a .eu domain. Maybe it was a pirated or tampered-with version. I am going to postpone further research until I am sure I have a good distribution.
<br><br>
Updated: Turns out Google doesn't provide an image or download of the OS itself, only the source code and compilation instructions. Also, the OS is based on Ubuntu, even though the build I got from Europe was based on SUSE. ZDNet suggested that the running OS would be slow, so maybe I am not missing much at this stage of development. I don't want to pay for a Chromebook just to run Chrome. Another reason to keep this on hold for now.
<br><br>
tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-83169338575451806582011-07-20T13:42:00.000-07:002011-07-29T13:12:47.288-07:00Getting by in GitGit is the source code management system used for the Linux kernel and many other highly complex projects. It was written by Linus Torvalds after some controversy over the proprietary BitKeeper program that used to manage the Linux kernel.<br /><br />I've needed to upgrade my skills recently to use git in place of subversion because that is what my shop decided to use. I've moved all my Rails code into a remote git server and so far, so good.<br /><br />One improvement is that it has fewer "droppings" than subversion. There is no hidden .svn directory in each directory with source code. Only a single .git directory at the root of the project, plus a .gitignore file for files you don't want git to track.<br /><br /><span style="font-weight:bold;">Initialize project tracking</span><br />git init<br /><br /><span style="font-weight:bold;">Check out an existing project from remote server</span><br />git clone ssh://server/git/project<br /><br /><span style="font-weight:bold;">Add a file for git to track</span><br />git add &lt;file&gt;<br /><br /><span style="font-weight:bold;">Add all files from this directory and below for git to track</span><br />git add .<br /><br /><span style="font-weight:bold;">Commit all files to local repository</span><br />git commit -a -m "message"<br /><br /><span style="font-weight:bold;">Undo changes to a file (re-check out from repository)</span><br />git checkout -- &lt;file&gt;<br /><br /><span style="font-weight:bold;">Pull files from remote repository and merge with local repository</span><br />git pull<br /><br /><span style="font-weight:bold;">Push files to remote repository (must commit first)</span><br />git push<br /><br /><span style="font-weight:bold;">Move file or directory to new location</span><br />git mv path 
destination<br /><br /><span style="font-weight:bold;">Remove file or directory from the working tree</span><br />git rm path<br /><br /><span style="font-weight:bold;">To create a remote repository from an existing project takes several steps</span><br />cd /tmp<br />git clone --bare /path/to/project (creates a /tmp/project.git directory)<br />scp project.git to remote server<br />cd /path/to/project<br />git remote add origin ssh://server/git/project<br />git config branch.master.remote origin<br />git config branch.master.merge refs/heads/mastertekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-82440447494345014052011-05-26T14:10:00.000-07:002011-05-26T14:34:52.035-07:00LVM basicsI've just spent a few hours with iSCSI SAN disks (EqualLogic) and Linux Logical Volume Manager (LVM). The abstraction is even deeper than that, because Linux at work is running under VMware, so it is really VMware talking to the SAN and presenting a SCSI disk to Linux. Since I only get into the LVM weeds a couple of times a year, I thought it would be helpful to list the steps I took to get usable disk space under Linux starting with the raw disk space.<br /><br /><span style="font-weight:bold;">Step One - create a new partition</span><br />Create a new partition with FDISK or PARTED. Mark the partition type hex 8E for LVM. In my case, the SCSI disk appeared as /dev/sdb and the partition using all space became /dev/sdb1. 
LVM is capable of using a raw device (no partition type), but I stayed in familiar partitioning territory.<br /><br /><span style="font-weight:bold;">Step Two - create LVM physical volume</span><br /><span style="font-style:italic;">pvcreate /dev/sdb1</span><br /><br /><span style="font-weight:bold;">Step Three - create LVM volume group in the physical volume</span><br /><span style="font-style:italic;">vgcreate new_volume_group /dev/sdb1</span><br /><br /><span style="font-weight:bold;">Step Four - create LVM logical volume in the volume group</span><br /><span style="font-style:italic;">lvcreate --name new_logical_volume --size 100G new_volume_group</span><br /><br /><span style="font-weight:bold;">Step Five - create a file system on the logical volume</span><br /><span style="font-style:italic;">mkfs -t ext4 /dev/mapper/new_volume_group-new_logical_volume</span><br />Note: Linux device mapper automatically creates a symlink to the disk in /dev/mapper using the volume group and logical volume names. If you choose more meaningful names than the example, the name won't look so awful.<br /><br /><span style="font-weight:bold;">Step Six - turn off automatic file system checks (optional)</span><br /><span style="font-style:italic;">tune2fs -c 0 /dev/mapper/new_volume_group-new_logical_volume</span><br /><br /><span style="font-weight:bold;">Step Seven - add mount point in /etc/fstab</span><br />Once the mount point is listed in /etc/fstab, mount it manually and it is ready to use.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-88270338649041637372011-04-13T10:13:00.000-07:002011-04-13T10:26:12.415-07:00Chrome RevisitedIn my first <a href="http://commandlinemac.blogspot.com/2010/08/google-chrome-experiment.html">serious test of Mac Chrome back in August 2010</a>, it came up short compared to other popular web browsers. But Chrome has been evolving fast. 
And since Firefox 4.0 was just released, it seemed like the right time to run another informal comparison.<br /><br />What a difference. <a href="http://www.google.com/chrome/intl/en/landing_chrome.html?hl=en">Chrome has become more stable</a>, faster, and has made huge strides in available and useful plugins. The plugin gap with Firefox was a glaring issue last time. As a web developer, Firefox was indispensable with the Firebug plugin, letting you drill into the CSS and JavaScript acting on individual DOM components. Chrome now has native developer tools that rival Firebug. I am also fond of the Awesome Screenshot plugin that allows you to capture and annotate a web page from within the browser.<br /><br />While I have found Firefox 4.0 a nice improvement in looks and rendering speed, I have also found it leaks memory, more so on Windows than Linux or Mac. I can rarely make it through a day without the Windows version locking up. That may be an artifact of the slew of plugins I am running and not the Firefox core.<br /><br />For the last couple of weeks, I have been using Chrome as my primary browser on all platforms and have been very pleased. Competition is a great thing and I am excited to see browsers evolving again after what seemed like a long period of stagnation.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-34142367345834767362011-01-23T09:10:00.000-08:002011-01-23T10:16:46.452-08:00Android G2 and iTunes music syncGetting music from iTunes on a Mac onto an Android G2 smart phone was an easy task. Maybe it wasn't so easy with the first generation Android phones, but the G2 I purchased a couple of weeks ago came with everything I needed.<br /><br /><span style="font-weight:bold;">Connecting the phone to the Mac</span><br /><br />My main Mac is a two year old MacBook. 
The G2 came with a USB sync cable and as soon as I attached it, it opened iPhoto to import pictures just like a camera. But the G2 also displayed a screen to enable it as a USB disk drive. I enabled disk mode, then took a look at the mounted volume in Finder.<br /><br />One of the icons on the volume was DoubleTwist. When I double-clicked it, it started downloading the latest Mac version of DoubleTwist. After installation, the DoubleTwist interface looked very much like iTunes. It allows you to automatically sync all music from iTunes or just selections.<br /><br />Since I don't plan to listen to music often on the G2, I chose to only sync playlists. This only syncs the music required in each playlist. There are a lot of options in DoubleTwist and the integration was seamless.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-79450597887084327292011-01-02T20:01:00.000-08:002011-01-03T09:35:53.066-08:00Automating FTPPeople frequently need to automate FTP sessions to upload or download files.<br />Most command line FTP clients, including the FTP client on the Mac, can automatically log in to an FTP server by reading the <span style="font-style:italic;">.netrc</span> file in the user home directory. Note that the FTP auto-login file starts with a dot in front of the name (dot netrc).<br /><br /><span style="font-weight:bold;">Syntax of the $HOME/.netrc</span><br /><br />The .netrc file can contain more than one auto-login configuration. Each FTP server has a set of commands, the minimum being the login name and password. You can create as many machine sections as you need. 
Here is a generic example:<br /><blockquote>machine ftp.server.com<br /> login myuserID<br /> password mypassword</blockquote><br /><br /><span style="font-weight:bold;">Very Important: .netrc permissions!</span><br />Since user IDs and passwords are stored in the .netrc file, the FTP client enforces permission checking on it. It must be set so that no groups and no other users can read or write to it. You can set the permissions on it with this command from the Terminal (from your home directory) once the file is created:<br />chmod 700 .netrc<br /><br /><span style="font-weight:bold;">Adding FTP commands in a BASH script</span><br />You can embed FTP commands in a BASH script to upload and download files.<br />For example, you could create a script file named <span style="font-style:italic;">ftpupload.sh</span>:<br /><blockquote>#!/bin/bash<br /># upload a file <br />/usr/bin/ftp -i ftp.server.com <<ENDOFCOMMANDS<br />cd backupdir<br />cd subdir<br />put datafile<br />quit<br />ENDOFCOMMANDS</blockquote><br /><br />In this example, I added the -i switch when running FTP to prevent it from prompting on multiple file uploads/downloads, even though it is only uploading one file in the example. I also use the BASH <span style="font-style:italic;">HERE document</span> feature to send commands to FTP. When the script is run, it will auto-login using the information in the .netrc file, change to the right remote directory and upload the datafile.<br /><br /><span style="font-weight:bold;">Scheduling the script with Cron</span><br />The last step is to get the BASH script to run unattended, say every day at 5:00 am. The old school UNIX way is to use Cron, but the fancy new Apple way is to use a launchd XML configuration. As long as cron is supported in OS X, I'll stick to the old school way. 
I leave the launchd configuration as an exercise for the reader.<br /><br />Add these lines with the command "crontab -e", then save:<br /><blockquote># automated FTP upload<br />0 5 * * * /Users/username/ftpupload.sh</blockquote>tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-67444860162940959372010-09-15T09:14:00.000-07:002010-09-15T10:07:14.204-07:00Dumping a Postgresql database remotely with SSHI ran into a rare problem recently with a large Postgresql database that was filling up the local disks of a server. The database was over 100 GB, with about 300 million records. There was a lot of churn and it had not been vacuumed in a long time. When I manually ran a vacuum on it, there was not enough working disk space to complete the operation, which left me in a bind.<br /><br />What I decided to do instead of using vacuum was to dump it to a remote backup location, then drop the database and restore it from the remote dump. 
I used SSH to run the remote commands.<br /><br /><span style="font-weight:bold;">Dump a remote Postgresql database to the local machine</span><br />ssh <i>user@remote-database-server</i> 'pg_dump database-name -t table-name' > table-name.sql<br /><br /><span style="font-weight:bold;">Restore a remote Postgresql database dump to the local database server</span><br />ssh <i>user@backup-machine</i> 'cat table-name.sql' | psql -d database-name<br /><br />Note that the dump command is run from the backup machine and the restore command is run from the database server.<br /><br />Also note the single quotes around certain parts of the command.<br /><br>tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-10090258759896217562010-08-19T16:26:00.001-07:002010-08-19T16:32:20.823-07:00The Google Chrome experimentWhen Google first announced their Chrome browser, packed with a revamped JavaScript engine (V8) and support for offline web apps, I thought I would give it a spin. The first couple of releases were for Windows and Linux -- no Mac version. Those early versions were a little clunky and appeared to offer no better performance than other popular browsers. So I moved on.<br /><br />When the Mac version became available, it was a much more polished browser. Another theoretical selling point was that each tab ran as a separate process so one crashed tab would not crash the whole browser. I decided to give Chrome a serious workout on my Mac at home.<br /><br />Things started out well enough and performance was good. I perused the help and learned some of the shortcuts. After about three good weeks, something went wrong. I don't know if it was an update, a growing cache, or what, but it started slowing down. Then, it started having problems loading pages from web sites that worked fine in other browsers. 
It is possible that it even caused wireless network issues, though that is just speculation at the moment. I need to do some more research to see if the problems were related to Chrome.<br /><br />For now, I am sticking with Firefox as my main browser on the Mac. I'll come back and try Chrome out after the next major release.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-11495887413919305492010-07-01T16:36:00.001-07:002010-07-01T16:48:56.767-07:00Finding IPs connected to your web server<br><br /><span style="font-weight:bold;">On Mac OS X</span><br />note: this also shows outgoing connections from web browsers<br><br />Get all IPs connected to your web server:<br /><blockquote>netstat -nat | sed -n -e '/ESTABLISHED/p' | awk '{print $5}' | sed 's/\./ /g' | awk '{print $1"."$2"."$3"."$4}' | sort</blockquote><br /><br />Get all <span style="font-style:italic;">unique</span> IPs connected to your web server:<br /><blockquote>netstat -nat | sed -n -e '/ESTABLISHED/p' | awk '{print $5}' | sed 's/\./ /g' | awk '{print $1"."$2"."$3"."$4}' | sort | uniq -c | sort -n</blockquote><br /><br /><span style="font-weight:bold;">On Linux</span><br />Get all IPs connected to your web server:<br /><blockquote>netstat -ntu | sed -e 's/::ffff://g' | awk '{print $5}' | cut -d : -f1 | sort -n</blockquote><br />Get all <span style="font-style:italic;">unique</span> IPs connected to your web server:<br /><blockquote>netstat -ntu | sed -e 's/::ffff://g' | awk '{print $5}' | cut -d : -f1 | sort | uniq -c | sort -n</blockquote>tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-112399835662560932010-05-16T14:38:00.001-07:002010-05-16T14:38:27.974-07:00RIP RJD<a href="http://latimesblogs.latimes.com/music_blog/2010/05/ronnie-james-dio-dies-sabbath-rainbow-singer.html">The Last in 
Line</a>tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-67375141989420462622010-03-22T10:39:00.000-07:002010-03-22T10:53:46.967-07:00The mysterious Data Center Technical Specialist certificationOn March 4, I received an email from Novell Technical Training that I had received a Novell certification for "Data Center Technical Specialist". This came as a surprise to me because I had not applied for this certification, taken any tests for this certification, nor had I even <span style="font-weight:bold;">heard</span> of this certification.<br /><br />Due to a cross-marketing agreement with the Linux Professional Institute, I had applied for and received the Novell Certified Linux Administrator certification a few weeks prior. This seemed legitimate to me, as I had extensive experience with SUSE Linux and my Linux skills are still sharp. I continue to perform Linux server administration as part of my daily work.<br /><br />However, I am not quite sure what the Data Center Technical Specialist is supposed to represent. Confused, I wrote to Novell Training asking what the certification meant. I received this equally mysterious reply:<br /><blockquote>Thank you for contacting Novell Training Services. You have received the certification as part of some changes we have made recently to our partner requirements. As part if these changes, some of the exams/certifications you have now count toward the new certification.</blockquote><br />I searched the official Novell Certification web site, and this certification does not appear anywhere. I suspect, but can't confirm, that it may be part of the Solution Provider program. As such, it is probably more of a value to Novell sales than to an individual technician. 
I remain somewhat baffled.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-34974970897668345452010-03-03T09:59:00.000-08:002010-03-03T13:18:26.541-08:00Fuzzy string matching in PostgreSQLA recent project required me to use fuzzy string matching, or sound alike matching, in an application that searched a list of names. It turns out there is a contrib module for the PostgreSQL database called <a href="http://www.postgresql.org/docs/8.3/static/fuzzystrmatch.html">fuzzystrmatch</a> that provides several different matching algorithms.<br /><br />The task at hand involved rewriting a legacy application, originally in PICK, in Ruby on Rails. The PICK application used a soundex search to find names of people that sounded like the search string.<br /><br />Three algorithms are available as PostgreSQL functions (after installation of the fuzzystrmatch module). They are <span style="font-style:italic;">soundex()</span>, <span style="font-style:italic;">levenshtein()</span>, and <span style="font-style:italic;">metaphone()</span>.<br /><br />Both soundex and metaphone convert a string into character codes. Soundex uses 4 characters and metaphone uses a configurable number of characters. Levenshtein directly compares two strings and returns an integer indicating how well the two strings match.<br /><br />After some trial and error, I found that metaphone produced better results than soundex. I didn't test the Levenshtein function.<br /><br />To improve the results, I added a classic substring search using <span style="font-style:italic;">ILIKE</span>. 
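As an illustration of what these functions compute, here is a minimal pure-Ruby version of the classic Soundex coding. This is a simplified sketch for intuition only; real implementations, including PostgreSQL's soundex(), handle extra edge cases such as 'h' and 'w' sitting between consonants with the same code:

```ruby
# Simplified American Soundex: keep the first letter, code the remaining
# letters 1-6 by sound group, collapse adjacent duplicate codes, pad to 4.
SOUNDEX_CODES = {}
{ "bfpv" => "1", "cgjkqsxz" => "2", "dt" => "3",
  "l" => "4", "mn" => "5", "r" => "6" }.each do |letters, digit|
  letters.each_char { |c| SOUNDEX_CODES[c] = digit }
end

def soundex(word)
  s = word.downcase.gsub(/[^a-z]/, "")
  return "" if s.empty?
  result = s[0].upcase
  prev = SOUNDEX_CODES[s[0]]
  s[1..-1].each_char do |c|
    code = SOUNDEX_CODES[c]     # nil for vowels and h, w, y
    result << code if code && code != prev
    prev = code
  end
  (result + "000")[0, 4]        # pad/truncate to 4 characters
end

puts soundex("Robert")   # R163
puts soundex("Rupert")   # R163 -- sounds alike, same code
puts soundex("Smith")    # S530
```

In the application itself the comparison happens inside PostgreSQL, so a Ruby version like this is only useful for understanding why "Robert" and "Rupert" match.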
The combination of ILIKE and metaphone gave me a broad, but reasonably accurate fuzzy string search.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-20741522294634909942010-02-26T13:39:00.000-08:002010-02-26T13:48:49.323-08:00Linux: fuser to find processes on TCP ports<span style="font-weight:bold;">Note: This is for Linux only. The Mac (BSD) version of fuser does not handle TCP/UDP ports.</span><br /><br />Once or twice a year, I run into a problem where a process is using a TCP port and I need to find out which one. I am documenting it here for the next time so I don't have to look it up in man pages.<br /><br />To see all processes, run fuser as root or with sudo.<br /><br />To list all processes connected to TCP port 22:<br /><br /><code>fuser -n tcp 22</code><br /><br>tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-60704807494238365482010-01-21T14:39:00.000-08:002011-09-14T11:58:35.285-07:00Rails 2.x scaffolding field types<br><span style="font-weight:bold;">Dude, where's my CRUD?</span><br /><br />One of the powerful features of Rails 1.x was the ability to generate CReate, Update, and Delete (CRUD) admin screens automatically using the scaffolding script built into Rails.<br /><br />The original scaffolding read the database models and created basic, but usable screens to let you add and edit database records. When the 2.x release of Rails came out, scaffolding lost that power. Now, you have to manually specify each table field and type on the command line when running scaffold. If you don't, the generated screens will be empty.<br /><br />Worse, a basic reference to all valid field types was missing. 
Here are all the valid types I have been able to dig up:<br /><br />string<br />text (long text, up to 64k, often used for text areas)<br />datetime<br />date<br />integer<br />binary<br />boolean<br />float<br />decimal (for financial data)<br />time<br />timestamp<br /><br />A mapping of the scaffolding types to data types in corresponding databases can be found on <a href="http://overooped.com/post/100354794/ruby-script-generate-scaffold-types"><span style="font-weight:bold;">Overooped</span></a>.<br /><br />
Here is an example of using 2.x scaffolding with data types, run from the Rails application root directory:<br /><code><br />ruby script/generate scaffold <span style="font-style:italic;">Modelname</span> name:string title:string employed_on:date remarks:text<br /></code><br /><br>
Here is an example of using rails 3.x scaffolding with data types, run from the Rails application root directory:<br /><code><br />ruby script/rails generate scaffold <span style="font-style:italic;">Modelname</span> name:string title:string employed_on:date remarks:text<br /></code><br /><br>
tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-65787118446220981582009-12-24T13:32:00.000-08:002009-12-27T11:56:08.872-08:00You ARE your operating system (among other things)Whenever you make a choice among products with similar functions, that choice spills over into the realm of social status.<br /><br />Cars are a common example where social rank often goes with brand, and even within brand, by model, and even within model, by variations, upgrades, and badges. All signify some social status. I became acutely aware of this after purchasing a car that did not fit my image. It was uncomfortable for everyone.<br /><br /><span style="font-weight:bold;">Your operating system</span><br /><br />You can see the social aspect tied to an operating system by looking at the Apple "I'm a Mac" campaign, and the weak Microsoft "I'm a PC" campaign response. A choice to run Linux or other operating system also carries connotations and shared group identity. In this sense, you <span style="font-weight:bold;">are</span> your operating system.<br /><br /><span style="font-weight:bold;">Your collection of choices</span><br /><br />Whether conscious or not, decisions and choices about purchases you make, where choices are available, weave together part of your social tapestry. The schools you attend, where you work, your clothes, your car, where you live, and yes, your operating system. You are those things, at least socially.<br /><br />The question is, in the context of a consumer society, can a choice be made without the attachment of social status?tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-36365820929609199512009-12-05T14:13:00.001-08:002009-12-05T14:23:31.040-08:00Calculating the square root of 2 longhandThere are many numerical methods to calculate square roots. This is the long hand method I learned in junior high. 
It produces one (accurate) digit at a time, but the working numbers get larger each iteration. Eventually, it bogs down because it is doing math with integers hundreds, then thousands of digits long.<br /><br />Still, it is fun for tinkering. If you run this Ruby script as is, it will calculate the square root of 2 to 1,000 digits. To adjust the precision, change the number of iterations on this line:<br /><br /><code>while iterations < 1000</code><br /><br />Here is the entire script...<br /><pre><br />#!/usr/bin/ruby<br /># calculate a square root of 2 using longhand<br /><br />def newroot(divisor, doubleroot)<br /> # calculate new root<br /> # formula is doubleroot _ * _ = closest to divisor<br /> # try 9, then 8, then 7 ...<br /><br /> i = 9<br /> while i >= 0<br /> multiple1 = (doubleroot.to_s + i.to_s).to_i<br /> multiple2 = i<br /> product = multiple1 * multiple2<br /><br /> if product <= divisor<br /> # found new root<br /> root = i<br /> # get modulus<br /> modulus = divisor - product<br /> break<br /> end<br /> i = i - 1<br /> end<br /><br /><br /> return root, modulus<br />end<br /><br />roots = Array.new<br /><br /># the nearest root to 2 is 1<br /><br />root = 1<br />roots.push(root)<br />remainder = 2 - root<br /><br />doubleroot = root * 2<br />divisor = remainder * 100<br /><br />iterations = 0<br /><br />while iterations < 1000<br /> root,remainder = newroot(divisor, doubleroot)<br /> roots.push(root)<br /><br /> # compute new doubleroot<br /> doubleroot = roots.to_s.to_i * 2<br /><br /> # compute divisor<br /> divisor = remainder * 100<br 
/><br /> iterations = iterations + 1<br /><br /> print "iteration " + iterations.to_s + "\n"<br />end<br /><br />print "final roots are " + roots.join + "\n"<br /></pre>tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-66020536652809479602009-11-10T08:56:00.000-08:002009-11-10T09:37:56.456-08:00Root 2: 500 million glyphsHaving generated a text file with the first billion digits of the square root of 2, I started thinking about how to convert it to text.<br /><br /><span style="font-weight:bold;">First Try</span><br /><br />My first try was an attempt to convert groups of 2 or 3 digits into a character using the ASCII code table. This was not very straightforward for several reasons. To start, many characters in the range 0-255 are either non-printable or produce symbols or punctuation. To get around this, I decided to use only the part of the table that starts with the numbers and runs through the end of the lower case letters.<br /><br />That left some 2-digit numbers that had to be scaled up into the useful range, and many 3-digit numbers (everything greater than 255) that had to be scaled down into it. I did get a script working that accomplished this, but the results were not satisfying.<br /><br /><span style="font-weight:bold;">Second Try</span><br /><br />I decided on a simpler solution: convert each pair of digits into a letter using the range 1-26. To do this, I took each digit pair modulo 26, added one, and indexed the result into the alphabet. Because 26 does not divide the 100 possible digit pairs evenly, this produces a slight bias against the last few letters of the alphabet, but I was willing to live with the result.<br /><br />As an example, the first two digits are 14. 14 divided by 26 is 0 with a modulus (remainder) of 14. Adding one gives 15, returning the letter O. The next two digits are also 14, again returning O. 
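The pair-to-letter mapping described above can be sketched in a few lines of ruby (a minimal illustration, not the original conversion script; the sample digits are simply the start of root 2 with the decimal point dropped):

```ruby
# Convert a digit string to letters, two digits at a time:
# each pair is taken modulo 26, plus one, then indexed into A..Z.
ALPHABET = ('A'..'Z').to_a

def digits_to_letters(digits)
  digits.scan(/\d\d/).map do |pair|
    index = pair.to_i % 26 + 1 # 1..26, as described above
    ALPHABET[index - 1]
  end.join
end

puts digits_to_letters("14142135") # => "OOVJ"
```

The first three pairs (14, 14, 21) reproduce the O, O, V walked through in the worked example.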
The next two are 21, returning the letter V.<br /><br />At the end of another scriptaculous ruby adventure, I had converted my billion digit file into 500 million letters.<br /><br /><span style="font-weight:bold;">Signals in the noise</span><br /><br />To make the file easy to search, I wrote the resulting text file out in 72-character lines. Yes, this causes some loss of continuity at line breaks, but again, I was willing to live with it.<br /><br />I quickly found my name, "KEITH", 65 times. The first occurrence was 7,398,524 digits into the square root of 2. My wife's name appeared 125 times.<br /><br />I found "GOD" 33,663 times, with the first appearance at 3,186 digits.<br /><br />As expected, the shorter the text string, the more likely it is to be found. Most words and strings of up to 6 characters can be found in the first billion digits of the square root of 2. Strings of 7 characters and longer, like "CALIFORNIA", are often not found.<br /><br />For fun, I ran a few searches for dates and numbers in the integer file, finding things like my birthday and the dates of historic events. Numeric strings are much easier to find.<br /><br />Next, I'll post one of my ruby scripts to calculate the square root of 2 digit by digit. It is very slow and inefficient, but fun for tinkering. I have a few more ideas for grappling with the root of 2.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-61061714815637941112009-10-23T12:03:00.000-07:002009-10-23T12:54:08.492-07:00Root 2: the shallow end of infinityA couple of months ago, I got the semi-psychotic idea to search for numbers and strings in the digits of the square root of 2. The idea is hardly unique. 
There has been a fair amount of research into this area, a variation on the <a href="http://en.wikipedia.org/wiki/Infinite_monkey_theorem">infinite monkey theorem</a>.<br /><br /><span style="font-weight:bold;">Why Root 2?</span><br /><br />Because it wasn't pi. Pi is probably the most studied math constant, so I wanted to start with something fundamental that wasn't pi.<br /><br />I needed an irrational constant, one that generates an infinite, effectively random stream of digits, and root 2 was as good a choice as any. I might argue that root 2 is even more commonly encountered than pi, in that it is found in the diagonal of a 1x1 square. Surely, 1x1 squares are more common than circles in the modern world.<br /><br /><span style="font-weight:bold;">Infinite Integers</span><br /><br />The first challenge was to find or generate the digits of root 2. With a quick search, I found files of up to 10 million digits that had been created by NASA. While a good start, I wanted a lot more than 10 million digits. I ended up using the 1 million digit file from NASA as a control to validate any other method I used to generate digits. My starting target was 1 billion digits, but I wanted the ability to go to an arbitrary number.<br /><br />When I started thinking about how to generate digits with a computer, I realized there were some unique hurdles in calculating numbers to arbitrary precision. First, standard machine data types fail at the task. 64-bit integers give you a large work space, but floating point numbers are notorious for losing precision. 
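The precision cliff is easy to see in ruby itself (a quick sketch, assuming a modern ruby with Integer.sqrt, which arrived in version 2.5):

```ruby
# A 64-bit Float carries only about 15-16 significant decimal digits,
# so the standard math library is no help for this job.
puts Math.sqrt(2) # => 1.4142135623730951, and then nothing

# Ruby integers, however, grow without bound. Scaling 2 by an even
# power of 10 and taking the integer square root yields as many
# correct digits of root 2 as you like (21 of them here).
puts Integer.sqrt(2 * 10**40) # => 141421356237309504880
```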
The math libraries of most languages work to about 15 digits, so those were out.<br /><br />I found more than <a href="http://en.wikipedia.org/wiki/Methods_of_computing_square_roots">10 documented methods for computing square roots</a>.<br /><br /><span style="font-weight:bold;">Baby Steps</span><br /><br />The first attempt to calculate the square root of 2 was actually written by a colleague who was intrigued enough to implement the Babylonian method in clojure. The first few tests seemed promising and matched the NASA file up to roughly 25,000 digits. It broke down after about 20 iterations, when the run time exceeded 18 hours. Getting to a very large number of digits did not appear to be practical.<br /><br />The next attempt was one I wrote in ruby using the duplex method. The duplex method has the advantage of computing one digit at a time, and I hoped this would avoid some of the problems of multiplying numbers millions of digits long. There may have been a flaw in my ruby implementation, because it got off track past digit 53 when I compared the results with the NASA file. I was careful to follow the example I found in Wikipedia, but I must have made some kind of mistake, because it always diverged.<br /><br /><span style="font-weight:bold;">Standing on Shoulders</span><br /><br />Like any good programmer, I quickly decided to search for working code that someone else had written. I found a very old C++ implementation, but also a handful of working programs. The one I tried first was <a href="http://numbers.computation.free.fr/Constants/PiProgram/pifast.html">PiFast</a>.<br /><br />PiFast is a Windows program that can calculate many constants in addition to Pi. Since my main computer is a MacBook, I loaded it into my VMware virtual machine running XP. As you might imagine, it was more than a little CPU intensive. After a few short trial runs, I let it loose on a 1 billion digit computation that, for all intents and purposes, locked my machine for about 11 hours. 
At the end, I had a file of 1 billion digits, the first million of which validated perfectly against the NASA file.<br /><br />To test the digits and manipulate the file -- all 1 gigabyte of it -- I wrote a number of scripts (ruby is my scripting language of choice these days).<br /><br />In my next post, I'll talk about how I converted digits to letters and what my early searches turned up.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-8174969310738997872009-09-25T12:17:00.000-07:002009-09-25T12:26:27.752-07:00Introduction to AutoFS in Mac OS X<span style="font-style:italic;">Note: I originally published this article on</span> <a href="http://lowendmac.com">Low End Mac</a><br /><br />OS X uses an AutoFS code stack based on Sun's Solaris version of Unix. Many of the advanced features are not documented very well, which can be an issue unless you are familiar with Solaris. I was not, and I had to do quite a bit of digging.<br /><br />AutoFS is often used in enterprise environments to set up network-based home directories and other network mounts for users at login. It can also dynamically mount network shares on access.<br /><br /><span style="font-weight:bold;">OS X auto_master and auto_home</span><br /><br />The /etc/auto_master file controls the auto-mounted Network File System (NFS) volumes. If you are going to mount NFS volumes from a Linux server, there is one gotcha that I covered in an <a href="http://commandlinemac.blogspot.com/2009/06/playing-nice-with-linux-nfs.html">earlier blog post</a>.<br /><br />The auto_master defines all "maps", which are collections of automounts related by mount point and organized in one file (or directory service entry). 
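Each non-comment line in a master map has the same general shape (a sketch; the map name and option here are illustrative, not defaults):

```
# mount-point   map-name      [mount-options]
/opt            auto_public   -nobrowse
```

The mount point is a local directory, the map names the file (or directory service record) describing what gets mounted there, and any options are passed through to the resulting mounts.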
Here is what the default file looks like on my Mac:<br /><pre>#<br /># Automounter master map<br />#<br />+auto_master # Use directory service<br />/net -hosts -nobrowse,nosuid<br />/home auto_home -nobrowse<br />/Network/Servers -fstab<br />/- -static</pre><br /><br />The plus (+) sign in front of the auto_master entry tells OS X to look in the directory service (Open Directory, LDAP, etc.) for an automount record and use it if found.<br /><br />Notice the /home entry is set to auto_home, and because it is not a full path, it is assumed to be /etc/auto_home. It is an example of an indirect map. The mount point in the local directory is defined here, but the remote mounts are defined in the /etc/auto_home map file. Network users who log in to the local machine will have their home directories mounted in /home according to the details in /etc/auto_home.<br /><br />Here is the default /etc/auto_home file:<br /><pre>#<br /># Automounter map for /home<br />#<br />+auto_home # Use directory service</pre><br /><br />Once again, we see the plus sign telling OS X to look for an auto_home record in the directory service. No further details are defined.<br /><br />The last two lines in auto_master handle NFS mounts defined in the /etc/fstab file, the common file system mount table in Linux and other Unix flavors. The /etc/fstab file is deprecated in OS X and not recommended.<br /><br /><span style="font-weight:bold;">Applying changes to autofs</span><br /><br />The automount process will not detect changes made to auto_master or other map files unless you tell it. This command tells the process to read all map files again:<br /><br /><code>sudo automount -vc</code><br /><br /><span style="font-weight:bold;">AutoFS wildcards</span><br /><br />Wildcards can be used in mount map files to allow directory substitution. 
For example, if you had this defined in auto_master:<br /><br /><code>/opt auto_public</code><br /><br />And this defined in /etc/auto_public:<br /><pre><br />* nfs.mydomain.com:/public/&</pre><br /><br />Then, when /opt/bin was accessed, nfs.mydomain.com:/public/bin would be mounted on /opt/bin. The same would apply for any subdirectory accessed under /opt.<br /><br /><span style="font-weight:bold;">Other Map Types</span><br /><br />OS X AutoFS supports <span style="font-style:italic;">direct maps</span>, where the local mount points are defined inside the mount map file, and <span style="font-style:italic;">indirect maps</span>, where the local mount point is defined in auto_master. The wildcard example above is an indirect map. There are also <span style="font-style:italic;">executable maps</span>, where the mount map file is actually an executable shell script that returns the names of the mount points within the trigger folder. Exploring executable maps is left as an exercise for the reader. Finally, you can define static maps in /etc/fstab or in the Directory Utility Mounts tab.<br /><br /><span style="font-weight:bold;">Other file system types</span><br /><br />All of the examples shown use the NFS file system. OS X AutoFS can also handle the Apple Filing Protocol (AFP) and Microsoft Server Message Block (SMB) file systems.<br /><br />To use these file systems, add the <code>-fstype=afp</code> and <code>-fstype=smbfs</code> options when defining the remote mount points. (Note: You cannot use smbfs for remote home directories unless you are using the Microsoft Active Directory service plugin.)tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-56890995439608675352009-06-27T09:10:00.000-07:002009-06-27T11:50:44.649-07:00Playing nice with Linux NFSI naively assumed that mounting an NFS volume exported by Linux would be uneventful, and it should be. 
My initial attempt to manually perform an NFS mount failed without any client-side error message.<br /><br />Checking system logs on the Mac revealed nothing.<br /><br />The Linux NFS server (Red Hat Enterprise 5) complained with this warning:<br /><code><br />nfsd: request from insecure port (192.168.7.130:49232)!</code><br /><br />After some Internet sleuthing, I found that the Mac NFS client tries to mount NFS volumes from high (non-reserved) TCP ports (>1024). You must explicitly tell the Linux NFS server to accept mount requests from high ports by adding the "<b>insecure</b>" option to /etc/exports. For example,<br /><code><br />/nfstest 192.168.206.0/24(rw,async,insecure)<br /></code><br />Then, NFS mounts from OS X should work as expected.<br /><br />tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.comtag:blogger.com,1999:blog-2971257711362934224.post-42950886992098287862009-04-12T09:54:00.000-07:002009-04-12T10:02:46.467-07:00Automating FTP on the MacThere is no shortage of GUI FTP programs, but kicking it old school on the command line allows you to easily automate uploads and downloads. The best part is, there is nothing to install. Everything you need waits patiently behind the warm glow of a Terminal session.<br /><br /><b>The Mac command line FTP program</b><br />The default command line FTP program in OS X 10.5 resides at:<br /><code>/usr/bin/ftp</code><br /><br />By all outward appearances and behavior, the Mac FTP program is the standard BSD version. The man page is the standard BSD page and contains a wealth of useful information. A typical command line FTP session is interactive and goes something like this:<br /><ul><br /><li>log in to an FTP server</li><br /><li>issue commands (ls (list), get (download), put (upload))</li><br /><li>quit</li></ul><br /><br />If you have a repetitive FTP task, the fun quickly fades into a mind-numbing exercise. 
This is where FTP automation shines.<br /><br /><b>The magical .netrc file</b><br />What makes FTP automation possible is a magical, little-known file called <i>.netrc</i>. The .netrc file is a plain text file that is hidden (the file name starts with a period) and lives in the root of your home directory. The .netrc file allows FTP to perform automatic logins to FTP servers based on the server name.<br /><br />The .netrc file is not created by default. You have to create it manually. To create an empty .netrc file, open a Terminal and use the following commands:<br /><code><br />touch .netrc<br />chmod 700 .netrc<br /></code><br /><br />It is <b>critical</b> that you issue the <code>chmod</code> command to set the permissions so that only the owner of the file can view it. If the permissions are not set correctly, the FTP client will assume the file has been compromised and will refuse to use it.<br /><br />Inside the .netrc, you define a block of settings for each FTP server you use, including the machine name, the login ID, and the password. Here is a typical block for a mythical FTP server:<br /><pre><br />machine myftpserver.com<br /> login myuser<br /> password mypassword</pre><br /><br />There are additional settings that can be included; check the FTP man page for more. You can test your settings by typing "ftp myftpserver.com" at a Terminal prompt, and it should automatically log in. Note that you can store multiple FTP server logins in the .netrc file.<br /><br /><b>Sending FTP commands from a BASH shell script</b><br />Once logins are automated, the final piece of the puzzle is to script a set of FTP commands. 
The following example uses an advanced BASH shell scripting technique called a "here" document to group the FTP commands to be sent to the server.<br /><pre><br />#!/bin/bash<br />/usr/bin/ftp -d myftpserver.com << ftpEOF<br /> prompt<br /> mput *.html<br /> quit<br />ftpEOF</pre><br /><br />The FTP command is issued with the -d flag (debug mode) to make it more verbose. That makes any kind of error more obvious. The connection is made to myftpserver.com using the ID and password from the .netrc file. Once the connection is made, the rest of the commands are issued one at a time until the end of the "here" document at the second "ftpEOF". Note that any valid FTP commands can be sent. In the example, the <code>prompt</code> command tells FTP not to prompt on multiple file operations, then <code>mput</code> uploads all files with an html extension. If you want to go the extra mile, you can extend the shell script and do things like reconnect to the FTP server to verify the file sizes of your uploads.<br /><br />While there are several ways you can automate FTP, the nice thing about this method is that it is portable to Linux or any other Unix system.tekewinhttp://www.blogger.com/profile/10230830520110635922noreply@blogger.com