Pre-Grant Publication Number: 20070118712

Discussion (15)
CLAIM 00001

A method for allocating memory freed by applications in a computer system having an operating system (O/S), said method comprising:

a) designating a status of said one or more freed memory units previously associated with an application as available for reuse;

b) organizing one or more freed memory units having said available for reuse status into one or more free memory pools, wherein freed memory units in a pool are directly allocated to an application requiring backing physical memory store without the O/S deleting data in the freed memory units.
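For readers who are not allocator specialists, the following is a minimal C sketch of the flow claim 1 recites; the structure and function names are invented for illustration and are not taken from the application.

    /* Sketch of claim 1 (invented names): a freed memory unit is marked as
       available for reuse by linking it into a free pool, and a later request
       is satisfied directly from that pool without the old data being deleted. */
    #include <stddef.h>

    struct free_unit {                 /* header stored inside the freed unit itself */
        struct free_unit *next;
    };

    struct free_pool {                 /* units whose status is "available for reuse" */
        struct free_unit *head;
    };

    /* Steps (a) and (b): designate the unit as reusable and organize it into the pool. */
    void pool_release(struct free_pool *p, void *unit)
    {
        struct free_unit *u = unit;
        u->next = p->head;
        p->head = u;
    }

    /* Direct allocation from the pool: the unit is returned as-is; its old
       contents are deliberately not deleted by the O/S. */
    void *pool_acquire(struct free_pool *p)
    {
        struct free_unit *u = p->head;
        if (u == NULL)
            return NULL;               /* pool empty; the caller must fall back */
        p->head = u->next;
        return u;
    }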

Comments
Paul McKenney (about 1 year ago)
Regarding Claim 00001: See the paper entitled "Hoard: A Scalable Memory Allocator for Multithreaded Applications" (http://www.cs.umass.edu/~emery/hoard/asplos2000.pdf). This reference has already been associated with this patent application.

This paper was published at the ASPLOS 2000 conference (November 12-15, 2000); a PDF may be found at http://www.cs.umass.edu/~emery/hoard/asplos2000.pdf. Mapping it to the claim elements: Section 5 shows it running on a computer system having an operating system; the Abstract describes designating memory units; Figure 1 and Section 3.2 show per-thread heaps being maintained; and Figure 2 shows that data is not deleted.

Therefore, this reference anticipates this claim.

CLAIM 00002

The method as claimed in claim 1, further comprising maintaining information about said application and said free memory pools, said information used to directly allocate said freed memory units to an application.
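A minimal sketch of the bookkeeping this dependent claim adds, again with invented names (it reuses the free_pool and pool_acquire sketch under claim 1 above): a per-application record ties each application to its pool so the allocator can hand its own freed units straight back to it.

    #include <stddef.h>

    struct free_pool;                          /* from the sketch under claim 1 */
    void *pool_acquire(struct free_pool *p);

    /* Information maintained about an application and its free memory pool. */
    struct pool_record {
        int               app_id;              /* which application owns the pool  */
        struct free_pool *pool;                /* its freed, reusable memory units */
    };

    #define MAX_APPS 64
    struct pool_record records[MAX_APPS];

    /* Use the maintained information to allocate directly from the caller's pool. */
    void *allocate_for(int app_id)
    {
        for (int i = 0; i < MAX_APPS; i++)
            if (records[i].pool != NULL && records[i].app_id == app_id)
                return pool_acquire(records[i].pool);
        return NULL;                            /* no pool recorded for this application */
    }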

Comments
Paul McKenney (about 1 year ago)
Regarding Claim 00002: Again, see the paper entitled "Hoard: A Scalable Memory Allocator for Multithreaded Applications" (http://www.cs.umass.edu/~emery/hoard/asplos2000.pdf). This reference has already been associated with this patent application (Prior Art Reference 45).

Figure 1 and Section 3.2 show how heaps (groups of memory blocks) are associated with threads, which on UNIX systems (as noted in Section 5) are in turn associated with applications.

Therefore, Prior Art Reference 45 anticipates this claim.

CLAIM 00024

The method as claimed in claim 23, wherein if it is determined that a memory page frame is not available for reuse in said free memory pool:

determining if a memory page frame is available in a system wide free memory pool, and if a page frame is available in a system wide free memory pool,

deleting data included in said memory page frame; and,

allocating memory page frames from a system wide free memory pool for said process.
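A minimal sketch of the miss path this claim recites, with invented names and a fixed page size assumed for illustration: only a frame taken from the system-wide pool has its data deleted before reuse.

    #include <stddef.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    struct frame_pool {                 /* a stack of reusable page frames */
        void  **frames;
        size_t  count;
    };

    void *get_frame(struct frame_pool *process_pool,
                    struct frame_pool *system_pool)
    {
        /* Claim 23 path: a frame from the process's own pool is reused as-is. */
        if (process_pool->count > 0)
            return process_pool->frames[--process_pool->count];

        /* Claim 24 path: fall back to the system-wide pool and delete the
           data in the frame before allocating it to the process. */
        if (system_pool->count > 0) {
            void *frame = system_pool->frames[--system_pool->count];
            memset(frame, 0, PAGE_SIZE);
            return frame;
        }
        return NULL;                    /* no frame available anywhere */
    }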

Comments
Anthony Phillips (about 1 year ago)
I think Claim 00024 is covered by prior art from virtual machine technology.

The Java Virtual Machine (JVM) garbage collector maintains free lists on a per-thread basis and also VM-wide. Allocations are supplied from the per-thread pool first to avoid lock contention; failing that, the thread can dip into the global (VM-wide) memory pool. The step from this prior art to the claim in question seems trivial.
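A sketch of that two-level pattern in C (this is an illustration of the scheme the comment describes, not code from any JVM; it assumes GCC-style __thread storage): the per-thread list needs no lock, and only a miss takes the shared lock.

    #include <pthread.h>
    #include <stddef.h>

    struct block { struct block *next; };

    static __thread struct block *thread_free_list;   /* per-thread, no locking */

    static struct block    *global_free_list;         /* shared across threads  */
    static pthread_mutex_t  global_lock = PTHREAD_MUTEX_INITIALIZER;

    void *alloc_block(void)
    {
        struct block *b = thread_free_list;
        if (b != NULL) {                       /* fast path: no lock contention */
            thread_free_list = b->next;
            return b;
        }
        pthread_mutex_lock(&global_lock);      /* slow path: dip into the global pool */
        b = global_free_list;
        if (b != NULL)
            global_free_list = b->next;
        pthread_mutex_unlock(&global_lock);
        return b;                              /* NULL if both pools are empty */
    }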

CLAIM 00001

Comments
Ori Pomerantz (about 1 year ago)
Claim 1 describes something mainframes have been doing for a long time. See the description of the GETMAIN macro, which may or may not initialize memory: http://mkt.neonsys.com/neon/sampdata/getmain.htm. Memory needs to be initialized at allocation because it is not wiped at deallocation.
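To illustrate the caller-side consequence of that behavior in familiar C terms (this example is illustrative and is not taken from the GETMAIN documentation):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t n = 256;

        char *raw = malloc(n);         /* may still hold whatever a prior user left behind */
        if (raw != NULL)
            memset(raw, 0, n);         /* so the caller initializes it at allocation time  */

        char *zeroed = calloc(1, n);   /* calloc is the variant that clears it for you     */

        free(raw);
        free(zeroed);
        return 0;
    }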
Michael Nichols (about 1 year ago)
Claim 1, as written, seems to miss the point of the invention. Reallocating used memory without erasing it first is not, per se, new. "malloc," by definition, means "Give me memory--no need for initialization" (as opposed to "calloc," which zeroes out the memory). This is precisely what a single-process system (think Turbo C++ for DOS) would do.

The point of this invention is that in a multi-process system, you ordinarily can't do what you would have done in DOS without posing a security risk. So instead, you usually make "malloc" work just like "calloc." This invention, however, finds a middle ground by splitting the "free pool" up into a global free pool and local "per-process" free pools: For the global free pool, malloc works like calloc, but if a process can be allocated memory out of its own pool, you can do malloc the old-fashioned way without worrying about security.
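A minimal sketch of that middle ground (invented names; not the application's implementation): clearing happens only when a block crosses a process boundary by way of the global pool.

    #include <string.h>
    #include <stddef.h>

    struct block {
        int    owner_pid;              /* process that last used this block */
        size_t size;
        char   data[];
    };

    void prepare_for_reuse(struct block *b, int requesting_pid)
    {
        if (b->owner_pid != requesting_pid)   /* crossing processes via the global pool   */
            memset(b->data, 0, b->size);      /* calloc-style: erase the old owner's data */
        /* same-process reuse: skip the clearing, old-fashioned DOS/malloc style */
        b->owner_pid = requesting_pid;
    }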

Unfortunately, Claim 1 does not distinguish between different kinds of free pools, so as written, it simply reads on the old-fashioned DOS-style malloc.