### VU meter

Tuesday, January 10th, 2012

My amazing 17-year-old brother made this VU meter from scratch. I'm ten years older and actually have an EE degree (who knew?) and could never do anything like this. Go Alex!



Don't you love how there have been tons of recent papers and commentaries on opening up scientific publishing to make it freely accessible, but they all end up in Nature or Science rather than going to PLoS to, you know, practice what they preach? Impact factor is the usual excuse, but it has drawn plenty of criticism and it amounts to a self-reinforcing feedback loop. Given that publishing companies are no longer necessary to find peer reviewers (as they were, say, 15 years ago), and electronic distribution is cheaper and faster than paper, their only value rests on their current, unstable position in the publishing process. I'd wager that things will change dramatically in the next five years.

Sometimes lots of little pieces of knowledge just click. I was doing my Cell Mechanics homework this morning and reading about dynamic instability in microtubules. Microtubules are polymer tubes assembled from many individual proteins, called subunits; they form the backbone of biological cells and are required for cell division, among other things. The reading mentioned the drug taxol, which binds to the tubulin subunits and prevents microtubules from breaking down. This interrupts cell division, which is why taxol is used to treat cancer.

I mentioned this to Bex, who had coincidentally read a recent NYTimes article about the discovery of taxol in the forests of the Pacific Northwest. You should check it out here.

Where it gets really interesting is that my research is on how neurons translate mechanical force into electrical signals, which can be processed by your brain. One of the motivations for the work is that people on chemotherapy often lose touch sensation in their extremities. I hadn't ever thought about it in depth before, but then the light bulb went off.

Special neurons in your body have proteins on their surface that allow sodium (like in table salt) to pass into and out of the cell whenever a force is applied to them. The movement of the sodium ions leads to an electrical signal being transmitted to your brain, alerting it of the force. These proteins (called ion channels) attach to microtubules inside the cell. So when taxol modifies the microtubule network in your cells, it may either stop the ion channel from binding to it (through another tether protein) or change the mechanical properties of the network, altering the sensitivity of mechanotransduction. Awesome.

The most valuable thing I got out of Google this summer was getting in the habit of writing down what I'm doing on a regular basis. At the end of each day (or each week), writing down what you've done and what you're going to do next makes taking the next step that much easier. About a year ago I tried keeping monthly snippets after reading some motivational piece in Science, but a month is just long enough to get you out of the groove and to lose track of things. Writing more frequently takes the emphasis away from big, grand thoughts and turns it more into a source of reference material.

With that in mind, I decided to set up a system for myself tonight. The first choice was desktop, web, or paper. I chose web because it's searchable, easy to back up, and available from anywhere. First I checked out Backpack, but decided that paying for something when you can do just as well for free was silly.

With it narrowed down to free, open-source CMS systems, I decided to go with a blogging setup because a daily-post, single-user format fits a blog better than a wiki. The two main contenders were WordPress and MovableType. I've used WordPress in the past, so I installed it and tried it out for a few minutes until I read about a sweet iPhone interface for MovableType. The upshot of WordPress is that Dreamhost has a one-click install for it, but the slick iPhone setup could not be contained. Installing MovableType took just a few minutes, and a little .htaccess magic made it nicely hidden away. We'll see how it goes.

I saw a great tech talk at work today about Science Commons (by the way, I'm at Google over the summer coding it up). James Boyle from Duke talked about the current state of scientific research and a few ideas for improving it.

The first thing he talked about was a semantic search engine designed to pull together complex ideas from papers. Today, if I'm interested in Alzheimer's and want to identify possible drug targets for treatment or model a cell signaling pathway, the first thing I'll need to do is start with the literature to find out what's been done. Sounds reasonable, right? It is, except for the hundred thousand papers related to a process like apoptosis (cell death) that I'll be digging through until I need to find a cure for myself. The problem is compounded by the fact that useful information is spread across so many specialized disciplines that I can't just ask my Alzheimer's researcher buddy to break things down. Review papers can help somewhat, but you won't necessarily find unexplored connections by reading them, and someone has to know enough to write them. Their solution was a search engine that finds actual connections, identifying cause-and-effect relationships in the text. He showed some interesting results, but you could tell it has a very long way to go.

He talked about a few other things too, but what I found most interesting was the problem of sharing materials between labs. Being 'scooped' is on most people's minds, and it's definitely the most important problem limiting collaboration between labs. If I have a cell line that I think I can get two good papers out of, there is a lot of motivation to hold off on sharing it with other labs until I have finished my work on it. And that's aside from the work required to prepare something for sharing. Their solution was to set up a formalized sharing system, so that in addition to things like publications and citations being considered in evaluations, the fact that lab X shared 800 embryos with other labs would also be a big plus on the CV.

I've thought a lot about this problem of giving people the motivation to share and haven't come up with a great solution, so it was especially interesting to hear James speak. If there is a good personal relationship where everyone will receive due credit, it's easy; but how can someone with a great idea in Japan coordinate with a fabrication wizard at Cornell when they don't even know each other yet? There must be a way to provide the incentive to share ideas and materials, anything from mask layouts to microfluidic devices to cell lines, and to ensure their quality. Once that's accomplished and you can put together materials from many different people in a black-box fashion, output will really shoot off the charts. And then that perfect search engine will really be important...

The final thing he touched on was the current debate over public access to journal articles funded by the federal government. The huge journal subscription fees are mainly justified by their managing of the peer review and editorial process. But when you think about it, why does that really take so much work? If an online system were set up that all grant awardees were required to enroll in, listing their specialties, it would be incredibly easy, and probably a lot less biased, to distribute articles for review that way. And then there's even the option of open pre-publication review, which Nature has flirted with lately, but that raises the specter of scooping again.

If you aren't familiar with it (or you've never used flickr), you should check out Creative Commons if you have a second. Also, the actual tech talk will hopefully be online in a few days.

Just ran across an interesting blog, Just Chromatography, that takes the cake for specialization. It's super interesting and super nerdy, but it made it to Slashdot so it's kind of mainstream. Right?

When working with experimental data you'll often run into damped oscillations. Most of the time you can eyeball the data and see that the disturbance is gone before it affects what you are really interested in. Occasionally, though, you'll need to actually calculate the damping coefficient and determine whether the system is overdamped or not.

The first time I ran into this problem was in an automatic controls class, where I needed to calculate the damping coefficient for an elastic band in order to use it elsewhere in our equations. At the time I only needed the coefficients for a few different configurations, so I did the analysis in Excel. Essentially, you use the Solver plug-in to minimize the sum of the squared error between your theoretical curve and the experimental data at the peaks of the wave. It works pretty well, and here's an example file to see what I mean.

The next time I ran into this problem I was working on a camshaft lab where we were analyzing the valve spring behavior for a range of springs, shims and engine speeds. We were generating a crap load of data, something like a few million data points in all. Using Excel wasn't very practical in this case, so I wrote a nice automated Matlab script to do it for me.

We're assuming that the amplitude envelope is described by a decaying exponential, y = a*exp(sigma*x). The problem is that this isn't linear in the fit parameters. To get around that, we take the log of both sides: log(y) = log(a) + sigma*x. A quick side note: log means the natural logarithm (ln), and sigma will be negative because we are dealing with decay.

Now that we have a linear equation, we can easily do a least-squares fit to calculate the constants a and sigma. We'll fit the equation Y = A + B*X where Y = log(y), A = log(a), B = sigma, and X = x. After some linear algebra you can show that p = (A'*A)^-1*A'*Y where A = [1 X1; 1 X2; ... 1 Xn], Y = [Y1; Y2; ... Yn], and p = [log(a); sigma]. In Matlab you can just use p = A\Y. From p you recover a = exp(p(1)) and sigma = p(2), and then plot away. But what is the physical meaning of sigma, and what does it tell you about the damping of the oscillator?
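As a sanity check on that derivation, here's the same log-linearized fit in a few lines of Python with numpy (a sketch on synthetic data, not part of the original Matlab workflow):

```python
import numpy as np

# Synthetic envelope samples: y = a * exp(sigma * x) with a = 2, sigma = -0.05
x = np.linspace(0.0, 50.0, 40)
y = 2.0 * np.exp(-0.05 * x)

# Log-linearize: log(y) = log(a) + sigma * x, then least-squares solve
# the overdetermined system [1 x] * p = log(y) for p = [log(a); sigma]
A = np.column_stack([np.ones_like(x), x])
p, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)

a_fit = np.exp(p[0])
sigma_fit = p[1]
print(a_fit, sigma_fit)  # recovers a = 2.0, sigma = -0.05
```

Since this data is noise-free, the fit recovers the parameters essentially exactly; with real data the residuals tell you how good the exponential-envelope assumption is.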

The idea is that |sigma| = chi*omega_n, where chi is the damping coefficient and omega_n is the natural frequency (sigma itself is negative for decay). The damped natural frequency is equal to sqrt(omega_n^2 - sigma^2). There are three cases then:

- Underdamped: omega_n^2 - sigma^2 > 0
- Critically damped: omega_n^2 - sigma^2 = 0
- Overdamped: omega_n^2 - sigma^2 < 0
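Those three cases boil down to one comparison; here's a minimal Python sketch (using the sign convention above, where the fitted sigma is negative for decay):

```python
import math

def classify_damping(omega_n, sigma):
    """Classify an oscillator from its natural frequency omega_n and the
    fitted envelope decay rate sigma (|sigma| = chi * omega_n)."""
    disc = omega_n**2 - sigma**2
    if disc > 0:
        return ('underdamped', math.sqrt(disc))  # damped natural frequency
    elif disc == 0:
        return ('critically damped', 0.0)
    else:
        return ('overdamped', 0.0)

regime, omega_d = classify_damping(10.0, -0.5)
print(regime, omega_d)  # lightly damped, so omega_d sits just below omega_n
```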

Here's some example code that generates sample data and calculates the damping parameters.

```matlab
clf
clear all

% This is just example data
xData = linspace(0,20*pi,1000);
yData = cos(2*xData).*exp(-.05*xData);

% If we are only interested in a certain data range
rangeLow = 0;
rangeHigh = 10000;

xCoords = [];
yCoords = [];

% Find the x,y coordinates for the oscillation peaks
yData = yData(:);
upOrDown = sign(diff(yData));
maxFlags = [upOrDown(1)<0 ; diff(upOrDown)<0 ; upOrDown(end)>0];
maxIndices = find(maxFlags);
for ii = 1:length(maxIndices)
    if (maxIndices(ii) > rangeLow) && (maxIndices(ii) < rangeHigh)
        xCoords = [xCoords xData(maxIndices(ii))];
        yCoords = [yCoords yData(maxIndices(ii))];
    end
end

% Calculate the optimal values of a and b
A = ones(length(xCoords),2);
Y = ones(length(xCoords),1);

% We take the log of the actual data (yCoords)
for ii = 1:length(xCoords)
    A(ii,2) = xCoords(ii);
    Y(ii,1) = log(yCoords(ii));
end

% After finding A and B, we know that a = exp(A) and b = B
p = A\Y;
a = exp(p(1));
b = p(2);

% We plug them into their function and plot away. Note that b < 0
curvePlot = a*exp(b*xData);
figure(1)
plot(xData,yData,'-b',xCoords,yCoords,'or',xData,curvePlot,'--g');
legend('Data','Points','Curve');
```

The problem with this code, however, is that if your data is noisy it will have trouble finding the actual oscillator peaks. Currently the code considers a peak to be any point where the first derivative is positive before and negative after the point. We can quickly improve on this by incorporating a tolerance band and iterating through the data to find maxima. The tolerance band specifies how far on either side of a point to check in order to verify that the current point is the maximum value in that range. Iterating is a bit slower than the 'find' approach, but it'll do.
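The tolerance-band idea is language-agnostic; here's a minimal Python version (an illustration, not the original Matlab) that accepts a point only if it's the unique maximum within ±check_range samples:

```python
def find_peaks(y, check_range):
    """Return indices i where y[i] is the unique maximum of the window
    y[i-check_range : i+check_range+1]. Noise smaller than the local peak
    no longer produces spurious hits, unlike a pure sign-of-slope test."""
    peaks = []
    for i in range(check_range, len(y) - check_range):
        window = y[i - check_range : i + check_range + 1]
        if y[i] == max(window) and window.count(y[i]) == 1:
            peaks.append(i)
    return peaks

data = [0, 1, 3, 1, 0, 2, 5, 2, 0, 1]
print(find_peaks(data, 2))  # -> [2, 6]
```

Widening check_range trades sensitivity to closely spaced peaks for robustness against noise, which is the same trade-off the Matlab version below makes.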

The following code does a lot more than the previous code. It was used in analyzing the damping of valve spring oscillations at high engine speeds, so it:

- Loads the spring force data for a specified engine speed. Three springs and three shim sizes were studied.
- Shifts the force data so that the average force during oscillations is zero. This is necessary to accurately report the magnitude of oscillation, which is handy for diagnosing dynamic coil binding.
- Calculates the damping factor.
- Calculates the theoretical natural frequency from known masses and spring constants.
- Calculates the theoretical damped natural frequency, which can later be compared to a fast Fourier transform (FFT) of the system to verify the numbers.
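The frequency bookkeeping in the last two steps boils down to a couple of formulas; here's a Python sketch (the numbers are loosely modeled on the script below, and treating roughly a third of the spring mass as participating is the same approximation the script makes):

```python
import math

def natural_frequencies(k, m_system, m_spring, sigma):
    """Theoretical natural and damped natural frequencies in rad/s.

    k:        spring stiffness (N/m)
    m_system: lumped retainer + keepers + rocker + valve mass (kg)
    m_spring: spring mass (kg); roughly 1/3 of it moves with the system
    sigma:    fitted envelope decay rate (1/s)
    """
    m_eff = m_system + m_spring / 3.0
    w_n = math.sqrt(k / m_eff)
    w_d = math.sqrt(w_n**2 - sigma**2)  # valid while underdamped
    return w_n, w_d

# Example values in the ballpark of the valve spring study
w_n, w_d = natural_frequencies(k=28.3e3, m_system=245.2e-3,
                               m_spring=61.6e-3, sigma=-40.0)
print(w_n, w_d)  # w_d sits just below w_n for light damping
```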

```matlab
clf
clear all

engineSpeed = 4800;

temp = load([num2str(engineSpeed) '_Ford_Damped_NoShim.m']);
AllData(:,1) = temp(:,2);
temp = load([num2str(engineSpeed) '_Ford_Damped_Shim034.m']);
AllData(:,2) = temp(:,2);
temp = load([num2str(engineSpeed) '_Ford_Damped_Shim064.m']);
AllData(:,3) = temp(:,2);
temp = load([num2str(engineSpeed) '_CASS_Damped_NoShim.m']);
AllData(:,4) = temp(:,2);
temp = load([num2str(engineSpeed) '_CASS_Damped_Shim034.m']);
AllData(:,5) = temp(:,2);
temp = load([num2str(engineSpeed) '_CASS_Damped_Shim064.m']);
AllData(:,6) = temp(:,2);
temp = load([num2str(engineSpeed) '_CADS_NoShim.m']);
AllData(:,7) = temp(:,2);
temp = load([num2str(engineSpeed) '_CADS_Shim034.m']);
AllData(:,8) = temp(:,2);
temp = load([num2str(engineSpeed) '_CADS_Shim064.m']);
AllData(:,9) = temp(:,2);

for ii = 1:9
    xData = linspace(0,720,2048)/engineSpeed/360*60;
    yData = AllData(:,ii);
    yData = yData - mean(yData(100:500));

    % If we are only interested in a certain data range
    rangeLow = 101;
    rangeHigh = 1200;
    checkRange = 100;

    maxIndices = [];
    for jj = 1:length(yData)
        if (jj > rangeLow) & (jj < rangeHigh)
            for kk = -checkRange:checkRange
                if (yData(jj) >= yData(jj+kk))
                    if kk == checkRange
                        maxIndices = [maxIndices jj];
                    end
                else
                    break
                end
            end
        end
    end
    xCoords = xData(maxIndices);
    yCoords = yData(maxIndices);

    % Calculate the optimal values of a and b
    A = ones(length(xCoords),2);
    Y = ones(length(xCoords),1);

    % We take the log of the actual data (yCoords)
    for jj = 1:length(xCoords)
        A(jj,2) = xCoords(jj);
        Y(jj,1) = log(yCoords(jj));
    end

    % After finding A and B, we know that a = exp(A) and b = B
    p = A\Y;
    a(ii) = exp(p(1));
    sigma(ii) = p(2);

    % We plug them into their function and plot away. Note that b < 0
    curvePlot = a(ii)*exp(sigma(ii)*xData);
    figure(ii)
    plot(xData,yData,'-b',xCoords,yCoords,'or',xData,curvePlot,'--g');
    title('CADS (4800 RPM)','FontSize',20);
    xlabel('Time (s)','FontSize',20);
    ylabel('Spring Force (N)','FontSize',20);
    legend('Experimental Data','Analyzed Points','Fitted Curve');
    pause
    print -r300 -djpeg80 SpringOscillations
end

% Calculate the natural frequencies:
mretainer = 28e-3; % kg
mkeepers = 4.3e-3;
mrocker = 116.5e-3;
mvalve = 96.4e-3;
msys = mretainer + mkeepers + mrocker + mvalve;

mspring(1) = 61.6e-3; % Ford
mspring(2) = mspring(1);
mspring(3) = mspring(1);
mspring(4) = 75.4e-3; % CASS
mspring(5) = mspring(4);
mspring(6) = mspring(4);
mspring(7) = 70.2e-3; % CADS
mspring(8) = mspring(7);
mspring(9) = mspring(7);

meff = mspring*.33 + msys;

k(1) = 28.3e3; % Damped Ford (N/m)
k(2) = k(1);
k(3) = k(1);
k(4) = 50.4e3; % Damped CASS
k(5) = k(4);
k(6) = k(4);
k(7) = 31.4e3; % Damped CADS
k(8) = k(7);
k(9) = k(7);

w_natural = sqrt(k./meff)
w_damped = sqrt(w_natural.^2-sigma.^2)
```

Note that I wrote all of this code way back in 2005 or 2006, and since then have become a much better Matlab coder. So take the code in this post with a grain of salt.

**Note:** This post is slightly outdated. Since I wrote it back in 2004, Comsol has integrated parameter looping into its solver (it was called Femlab back then). Many use cases that previously required Matlab integration can now be handled entirely within Comsol. The particular case shown here could not (because the permittivity is calculated from a helper function). Unfortunately I don't have the time to answer specific questions or basic FEA issues, but I hope this post gets you pointed in the right direction for simulation automation.

Comsol is extremely powerful because it's well integrated with Matlab. However, the ability to build and run simulations in Matlab isn't well documented.

Here are a few cases where you would want to build a simulation in Matlab:

- You want to simulate a variety of geometries to optimize a system
- You are generating tons of data and you would like to save the results for later analysis and plot generation
- Your simulation is so complicated that the file is becoming tens or hundreds of megabytes in size

Create a new simulation in Comsol or open an existing one. Now choose Save As... from the File menu. Before you do anything else, look at the options you have for saving it. One of them is to save it as a .m file. Do that, and then open the saved file in a text editor. You'll see something like this:

```matlab
flclear fem
clear vrsn
vrsn.name = 'FEMLAB 3.1';
vrsn.ext = '';
vrsn.major = 0;
vrsn.build = 157;
vrsn.rcs = '$Name: $';
vrsn.date = '$Date: 2004/11/12 07:39:54 $';
fem.version = vrsn;
...
```

Take a few minutes to dig through the file and guess what it's doing. It's important to realize that this is a normal .m file so you have access to all of Matlab's libraries and commands.

So how do you run these things? The next time you start Comsol, be sure to use the Comsol with Matlab option, and then just open the file in Matlab like you would any other script. This last step gave me plenty of grief, and it's surprising that Comsol doesn't more clearly communicate the massive power you gain by working directly with the text file (edit: at least in 2004 when I was working on this; I've heard rumors that things are much better documented now).

My usual procedure for using this feature is to create a new simulation in the Comsol GUI with the exact options, geometry, boundary conditions, etc. that I want. Once that's done, run the simulation and then save it as a .m file. This approach is useful because Comsol generates all of the hard parts for you, and you can just go back and add your for loops, etc.

Here's an example file that I used for simulating the local electromagnetic field enhancement for a gold nanocrescent at UC Berkeley.

```matlab
% Joseph Doll
% OpticalOneLayer.m
% FEMLAB 3.1 simulation to calculate the maximum local electric field
% enhancement for a gold or silver nanocrescent of arbitrary ID, OD,
% and opening width immersed in a low absorption dielectric medium of
% arbitrary dielectric constant.
function [simResults, lambda] = NanocrescentSim(OD, IDRatio, ...
    DRatio, lambda, eps_o)

flclear fem
clear vrsn
vrsn.name = 'FEMLAB 3.1';
vrsn.ext = '';
vrsn.major = 0;
vrsn.build = 157;
vrsn.rcs = '$Name: $';
vrsn.date = '$Date: 2004/11/12 07:39:54 $';
fem.version = vrsn;

% -------------------------------------------------------------
% -------------- Start Geometry Initialization ----------------
% -------------------------------------------------------------

% Fix outside diameter
% Specify inside diameter as a proportion of OD
% Specify delta as a proportion of ID
indexCounter = 1;
theta = 0;
for outsideRadius = OD/2
    for insideRadius = outsideRadius.*IDRatio
        for delta = (insideRadius*2).*DRatio
            sprintf('OD = %d, ID = %d, OW = %d', ...
                2*outsideRadius, 2*insideRadius, delta)

            % Calculate the horizontal displacement of each ellipse based
            % upon the dimensions noted above (radii and opening widths)
            outerCircleDx = 0;
            outerCircleDy = 0;
            innerCircleDx = -((outsideRadius - insideRadius) + ...
                (insideRadius*(1-cos(asin(delta/(2*insideRadius))))) - ...
                (outsideRadius*(1-cos(asin(delta/(2*outsideRadius))))));
            innerCircleDy = 0;

            % Define the ellipses based upon the calculated dimensions
            g1 = ellip2(outsideRadius,outsideRadius,'base','center','pos', ...
                [outerCircleDx,outerCircleDy],'rot','0');
            g5 = ellip2(insideRadius,insideRadius,'base','center','pos', ...
                [innerCircleDx,innerCircleDy],'rot','0');

            % Define the environment bound rectangle
            bound_rect = rect2('4e-6','2e-6','base','center', ...
                'pos',{'0','0'},'rot','0');

            % Boolean operations to create the "sharp structures"
            sharp_nc = geomcomp({g1,g5},'ns',{'g1','g5'},'sf', ...
                'g1-g5','edge','none');

            % Fillets
            round_nc = fillet(sharp_nc,'radii',2E-9,'point',[1 2]);

            % Rotate the finished nanocresent
            nanocrescent = rotate(round_nc, theta ,[0,0]);

            clear s
            s.objs = {nanocrescent, bound_rect};
            s.name = {'NC','BR'};
            s.tags = {'nanocrescent','boundrect'};

            fem.draw = struct('s',s);
            fem.geom = geomcsg(fem);

            % ---------------------------------------------------------
            % ---------------- Start Simulation Loop ------------------
            % ---------------------------------------------------------

            % Vector of the max electric field at each wavelength
            emax = [];

            % Calculate the dielectric constant values
            % Requires helper function OpticalData.m
            eps_i = OpticalData( lambda(1), ...
                lambda(length(lambda)),length(lambda),1);

            for ii = 1:length(lambda)
                fem.const = {'eps_i',eps_i(ii),'eps_o',eps_o};

                clear appl
                appl.mode.class = 'InPlaneWaves';
                appl.module = 'CEM';
                appl.assignsuffix = '_wh';

                % Solve via input wavelength
                clear prop
                prop.field = 'TM';
                prop.inputvar = 'lambda';
                appl.prop = prop;

                % Set the boundary conditions
                clear bnd
                bnd.H0 = {{0;0;1},{0;0;0},{0;0;0}};
                bnd.type = {'NR','cont','NR'};
                bnd.ind = [1,3,3,3,2,2,2,2,2,2,2,2,2,2];
                appl.bnd = bnd;

                % Set the domain variables
                clear equ
                equ.epsilonr = {'eps_o','eps_i'};
                equ.ind = [1,2];
                appl.equ = equ;

                % Set the current wavelength
                appl.var = {'nu','1e9','lambda0',lambda(ii)};

                fem.appl{1} = appl;
                fem.border = 1;
                fem = multiphysics(fem);

                % Mesh initialize parameters, smaller = finer
                fem.mesh = meshinit(fem, ...
                    'hmaxfact',1, ...
                    'hcurve',0.1, ...
                    'hgrad',1.2, ...
                    'hcutoff',0.001);
                fem.xmesh = meshextend(fem);

                fem.sol = femlin(fem, ...
                    'solcomp',{'Hz'}, ...
                    'outcomp',{'Hz'});

                % posteval returns the electric field at each point
                eii_struct = posteval(fem,'normE_wh','dl',[1]);

                % Take the maximum field value and add it to emax
                eii_max = max(eii_struct.d);
                emax = [emax eii_max];
            end

            simResults{indexCounter} = emax;
            indexCounter = indexCounter + 1;
        end
    end
end
```

Using this approach I was able to automate the process of finding the peak electric field intensity as a function of design parameters. Combined with another script, I ran simulations for about three days straight, testing 300+ different geometry combinations. And all I had to do was set it up, click a button, and put a note on the computer not to disturb it. Like I said, pretty awesome.

Cantera is a suite of object-oriented software tools for problems involving chemical kinetics, thermodynamics, and/or transport processes. It can be used from MATLAB, Python, C++, or Fortran. I used it for ME 140 (Combustion Engineering) at UC Berkeley in Fall 2005 to calculate things like adiabatic flame temperatures and equilibrium states for homework problems.

There isn't any documentation for installing Cantera on Mac OS X, and although it should be straightforward, I encountered some problems, so I want to try to fill that documentation void. Cantera is incredibly powerful software, and its creator (Dave Goodwin) is helpful and active on the official newsgroup.

My system specifications are:

- Mac OS X 10.4.2
- XCode 2.0 developer tools (current version as of 9/13/05)
- Matlab 7 (R14)
- Cantera installed using the 1.6.0 binary release

I could not run the Cantera examples from Python or Matlab and was encountering the following types of errors:

```
>> run_examples

EQUIL a chemical equilibrium example.

This example computes the adiabatic flame temperature and equilibrium
composition for a methane/air mixture as a function of equivalence ratio.

Traceback (most recent call last):
  File "./.cttmp1125719147.pyw", line 1, in ?
    from ctml_writer import *
ImportError: No module named ctml_writer

??? Cantera Error!
Procedure: ct2ctml
Error: could not convert input file to CTML.
Command line was: sleep 1; python ./.cttmp1125719147.pyw &> ct2ctml.log

Error in ==> XML_Node.XML_Node at 11
    x.id = ctmethods(10,15,0,src); % newxml(name)
Error in ==> Solution.Solution at 30
    doc = XML_Node('doc',src);
Error in ==> IdealGasMix at 39
    s = Solution(a);
Error in ==> equil at 12
    gas = IdealGasMix('gri30.cti');
Error in ==> run_examples at 3
    equil(0);
```

and from within Python

```
-----:/Applications/Cantera/demos/python/flames -----$ python flame1.py
Traceback (most recent call last):
  File "flame1.py", line 7, in ?
    from Cantera import *
ImportError: No module named Cantera
```

The error messages indicate that Python is not aware of the Cantera module. After some thought and googling, I decided to check which directories Python searches for its modules. The module search path can be printed by importing sys and printing sys.path.

```
-----:/Applications/Cantera/demos/python/flames -----$ python
Python 2.3.5 (#1, Mar 20 2005, 20:38:20)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1809)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print sys.path
['', '/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python23.zip',
 '/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3',
 '/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/plat-darwin',
 '/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/plat-mac',
 '/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/plat-mac/lib-scriptpackages',
 '/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/lib-tk',
 '/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/lib-dynload',
 '/System/Library/Frameworks/Python.framework/Versions/2.3/lib/python2.3/site-packages',
 '/System/Library/Frameworks/Python.framework/Versions/2.3/Extras/lib/python']
```

As you can see above, the /Library/Python/2.3/ directory (or, optionally I think, the /Applications/Cantera/bin/ directory) is not being searched for modules, and it happens that the Cantera files are located there. To search additional directories, you can create .pth files in the current search path; each line of a .pth file adds one directory to the search path. There is a file called Extras.pth in the /Library/Python/2.3/site-packages/ directory which does exactly this, and it's simple to add our new directories to it.
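If you want to see the mechanism in action without editing any .pth files, appending a directory to sys.path at runtime has the same effect (a generic Python illustration, nothing Cantera-specific):

```python
import os
import sys
import tempfile

# Create a throwaway directory holding a module Python doesn't know about,
# standing in for an install location like /Library/Python/2.3/.
module_dir = tempfile.mkdtemp()
with open(os.path.join(module_dir, 'mymodule.py'), 'w') as f:
    f.write('answer = 42\n')

# Appending the directory to sys.path is the runtime equivalent of
# listing it on a line of a .pth file in site-packages.
sys.path.append(module_dir)

import mymodule
print(mymodule.answer)  # -> 42
```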

Open the Extras.pth file from the command line:

```
emacs /Library/Python/2.3/site-packages/Extras.pth
```

Edit it so that it looks like this:

```
/System/Library/Frameworks/Python.framework/Versions/2.3/Extras/lib/python
/Library/Python/2.3/
```

Then save and close it with Control-X Control-S followed by Control-X Control-C.
