

I get a MemoryError using <file>.read on an AIX machine with lots of memory. How can I use more?

Jul 23rd, 2002 00:48

Michael Chermside, Markus Indenbirken, Seth Grimes

Well, just buy more memory! <wink>
The problem here is almost certainly due to your having tried to read
the entire file into memory at once. Most likely, the solution is for
you to read and process it bit by bit, never keeping the entire thing in
memory at once. If you think that might work for you, keep reading. (If
not, another option might be to use a memory-mapped file. See the 
documentation on the `mmap` module.)
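As a rough illustration of the mmap option, here is a minimal sketch. The file name is made up, and the sample file is created first just so the snippet runs on its own; with a memory-mapped file the operating system pages data in on demand instead of loading it all at once.

```python
import mmap

# Create a small sample file so the example is self-contained
# (the name 'readme.txt' is illustrative).
with open('readme.txt', 'w') as f:
    f.write('hello world\n' * 3)

f = open('readme.txt', 'r+b')
mm = mmap.mmap(f.fileno(), 0)   # map the whole file; 0 means "entire length"
first_line = mm.readline()       # pages in only what is needed
mm.close()
f.close()
```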
If you have a file object, f:
>>> f = file('readme.txt')
there are several ways you can try to read the contents. If you want to
read the entire contents into a string, it works like this:
>>> wholeFile = f.read()
But, as we said above, that may be too big. You can supply a maximum
size if you like:
>>> first1K = f.read(1024)
and if you do this in a loop you can keep reading through the file in
chunks. When you reach the end of the file, you will get a chunk shorter
than 1024 bytes (and an empty string once you are actually at EOF).
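Putting that together, the chunked-reading loop looks like the sketch below. The file name is illustrative, and a small sample file is written first so the loop has something to read; in real use you would process each chunk instead of just counting bytes.

```python
# Create a sample file so the example is self-contained.
with open('sample.dat', 'wb') as f:
    f.write(b'x' * 3000)

total = 0
f = open('sample.dat', 'rb')
while True:
    chunk = f.read(1024)    # read at most 1 KB at a time
    if not chunk:           # empty result means end-of-file
        break
    total += len(chunk)     # stand-in for "process the chunk here"
f.close()
```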
If you have a text file, you can (and probably should) read it in line
by line. Calling the readlines method:
>>> allLines = f.readlines()
will return all of the lines as one big list... useful sometimes, but
it still keeps everything in memory at once. Instead, try using:
>>> for line in f.xreadlines():
...     process(line)
The difference is that the lines are read in one-by-one on demand.
But the ***BEST*** way to do line-by-line processing requires Python 2.2
or higher. It's fast, and it's really easy to type:
>>> for line in f:
...     process(line)
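For a self-contained version of that idiom, here is a small sketch that counts lines without ever holding the whole file in memory. The file name is made up, and counting stands in for whatever process(line) would do.

```python
# Create a small sample file so the example runs on its own.
with open('lines.txt', 'w') as f:
    f.write('a\nb\nc\n')

count = 0
f = open('lines.txt')
for line in f:      # lines are fetched one at a time, on demand
    count += 1      # stand-in for process(line)
f.close()
```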

© 1999-2004 Synop Pty Ltd