Pre-Grant Publication Number: 20070118712

Discussion (15)
7
Paul McKenney (about 1 year ago)
Regarding Claim 00002: Again, see the paper entitled "Hoard: A Scalable Memory Allocator for Multithreaded Applications" (http://www.cs.umass.edu/~emery/hoard/asplos2000.pdf). This reference has already been associated with this patent application (Prior Art Reference 45).

Figure 1 & Section 3.2 show how heaps (groups of memory blocks) are associated with threads, which in turn are associated with applications on UNIX systems (as noted in Section 5).

Therefore, Prior Art Reference 45 anticipates this claim.
6
Paul McKenney (about 1 year ago)
Regarding Claim 00001: See the paper entitled "Hoard: A Scalable Memory Allocator for Multithreaded Applications" (http://www.cs.umass.edu/~emery/hoard/asplos2000.pdf). This reference has already been associated with this patent application.

This was published at the ASPLOS 2000 conference (November 12-15, 2000); a PDF may be found at http://www.cs.umass.edu/~emery/hoard/asplos2000.pdf. Mapping the reference to the claim elements:

Section 5: runs on a computer system having an operating system.
Abstract: designates memory units.
Figure 1 & Section 3.2: maintains per-thread heaps.
Figure 2: data not deleted.

Therefore, this reference anticipates this claim.
5
WILLIAM SIMMONS (about 1 year ago)
Is there art within the collection which you see as clearly qualifying as prior art against claim 1? If so which art do you think discloses all of the features of claim 1? If not, do you have art that you wish to submit?
4
Michael Nichols (about 1 year ago)
Claim 1, as written, seems to miss the point of the invention. Reallocating used memory without erasing it first is not, per se, new. "malloc," by definition, means "Give me memory--no need for initialization" (as opposed to "calloc," which zeroes out the memory). This is precisely what a single-process system (think Turbo C++ for DOS) would do.

The point of this invention is that in a multi-process system, you ordinarily can't do what you would have done in DOS without posing a security risk. So instead, you usually make "malloc" work just like "calloc." This invention, however, finds a middle ground by splitting the "free pool" up into a global free pool and local "per-process" free pools: For the global free pool, malloc works like calloc, but if a process can be allocated memory out of its own pool, you can do malloc the old-fashioned way without worrying about security.

Unfortunately, Claim 1 does not distinguish between different kinds of free pools, so as written, it simply reads on the old-fashioned DOS-style malloc.
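
A minimal sketch in C of the split described above; the names (local_pool, global_pool, pool_alloc) and the head-of-list lookup are hypothetical simplifications for illustration, not anything taken from the application:

    /* Illustrative sketch only: hypothetical names, not the applicant's code.
     * Blocks reused from the process's own pool keep their old contents;
     * blocks taken from the shared global pool are zeroed first, like calloc.
     * Only the head of each list is checked, to keep the sketch short. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct freeblk { struct freeblk *next; size_t size; } freeblk_t;

    static freeblk_t *local_pool;   /* blocks this process freed earlier */
    static freeblk_t *global_pool;  /* blocks that may have belonged to others */

    void *pool_alloc(size_t size)
    {
        if (local_pool && local_pool->size >= size) {
            freeblk_t *b = local_pool;   /* same-owner reuse: no wipe needed */
            local_pool = b->next;
            return b;
        }
        if (global_pool && global_pool->size >= size) {
            freeblk_t *b = global_pool;
            global_pool = b->next;
            memset(b, 0, b->size);       /* cross-owner reuse: zero for safety */
            return b;
        }
        return calloc(1, size);          /* nothing pooled: fall back to the system */
    }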
Manuel Perez (about 1 year ago)
I agree with Michael. Claim 1, as written, does not disclose the whole idea of the invention. Moreover, it is quite broad and a bit unclear. Consequently, there are many documents which can be used against the novelty of claim 1.
3
Anthony Phillips (about 1 year ago)
I think Claim 00024 has prior art in virtual machine technology.

The Java Virtual Machine (JVM) garbage collector maintains free lists on a per-thread basis and also VM-wide. Allocations are supplied from the per-thread pool first to avoid lock contention; failing that, the thread can dip into the global (VM-wide) memory pool. The step from this prior art to the claim in question seems trivial.
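
A rough sketch of that per-thread-first path, assuming fixed-size blocks and hypothetical names (this is not actual JVM code, just an illustration of where the lock contention is avoided):

    /* Illustrative sketch, not JVM internals: the fast path touches only a
     * thread-local list and takes no lock; only the fallback path contends
     * on the shared pool. Fixed-size blocks are assumed for brevity. */
    #include <pthread.h>
    #include <stdlib.h>

    #define BLK_SIZE 256

    typedef struct blk { struct blk *next; } blk_t;

    static __thread blk_t *thread_free;   /* per-thread free list: no lock needed */
    static blk_t *global_free;            /* VM-wide pool, shared by all threads */
    static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

    void *tl_alloc(void)
    {
        blk_t *b = thread_free;
        if (b) {                          /* fast path: no contention at all */
            thread_free = b->next;
            return b;
        }
        pthread_mutex_lock(&global_lock); /* slow path: contend on the shared pool */
        b = global_free;
        if (b)
            global_free = b->next;
        pthread_mutex_unlock(&global_lock);
        return b ? (void *)b : malloc(BLK_SIZE);
    }

    void tl_free(void *p)
    {
        blk_t *b = p;                     /* freed blocks return to this thread's list */
        b->next = thread_free;
        thread_free = b;
    }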
2
Paul McKenney (about 1 year ago)
We might be misinterpreting Claim 1. The body of the patent talks not about memory allocation within a single address space, but rather the operating system's handling of memory released from the application that is later needed once more. See the discussion of Figure 1B in the patent body -- the point seems to be that the operating system keeps a per-process pool of pages that were released by the corresponding process, and that thus don't need to be zeroed if given back to that same process.
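
A minimal sketch of that reading of Figure 1B, with hypothetical names and a toy per-process page list; it is illustrative only, not the applicant's implementation:

    /* Illustrative sketch of the Figure 1B reading above; the names are
     * hypothetical, not from the patent. Pages a process released stay on
     * that process's own list and can be handed back to it without zeroing;
     * pages drawn from the system-wide list are scrubbed first. */
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096
    #define MAX_PROCS 64

    typedef struct page { struct page *next; uint8_t data[PAGE_SIZE]; } page_t;

    static page_t *per_proc_released[MAX_PROCS]; /* pages each process gave back */
    static page_t *system_free;                  /* pages of unknown prior owner */

    page_t *get_page(int pid)
    {
        page_t *p = per_proc_released[pid];
        if (p) {                                 /* same process gets its own page back */
            per_proc_released[pid] = p->next;
            return p;                            /* stale contents are its own data */
        }
        p = system_free;
        if (p) {
            system_free = p->next;
            memset(p->data, 0, PAGE_SIZE);       /* scrub before crossing a process boundary */
        }
        return p;
    }

    void release_page(int pid, page_t *p)
    {
        p->next = per_proc_released[pid];        /* remember which process released it */
        per_proc_released[pid] = p;
    }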
WILLIAM SIMMONS (about 1 year ago)
Are you suggesting that the reference cited by Ori falls outside of the scope of claim 1? Is the feature that you describe (the operating system's handling of memory released from the application that is later needed once more) a literal element of claim 1? Or of any claim?
Paul McKenney (about 1 year ago)
I am suggesting that the kneejerk reaction to the "GETMAIN" reference would be for the applicant to simply narrow the claim, possibly along the lines of claims 6 and 7. So it would be good to find a reference that anticipates the body of the patent, rather than just the wording of the claim itself.
WILLIAM SIMMONS (about 1 year ago)
You make a good point, Paul. Ideally, the prior art reference would accomplish both goals: disclose what is claimed and what is disclosed in the body. Do you know of a reference that discloses the feature you find lacking in the GETMAIN reference? Ori seems convinced that GETMAIN discloses claim 1. Do the other peers think that claim 1 is disclosed by GETMAIN in light of Figure 1B?
Gerald McBrearty (about 1 year ago)
Dependent claims 6 and 7, I would guess, bring in this concept, but not that crisply.
Ori Pomerantz (about 1 year ago)
I still think they're identical.

In UNIX, each process has its own address space. What they're suggesting is that memory that is freed by the process and then reallocated to it again won't be zeroed out.

In z/OS, an address space can be used by multiple processes. However, there is still a paging mechanism that determines which virtual address goes to which real address, and whether that real address is in primary or secondary storage (= swap). Physical memory is wiped only when transferred between address spaces.
1
Ori Pomerantz (about 1 year ago)
Claim 1 is something mainframes have been doing for a long time. See the description of the GETMAIN macro, which may or may not initialize memory: http://mkt.neonsys.com/neon/sampdata/getmain.htm. Memory needs to be initialized at allocation because it is not wiped at deallocation.
Rahan Uddin (about 1 year ago)
To make sure the relevant references get to the USPTO, and to allow others to review/discuss, please UPLOAD into PRIOR ART (click "prior art/submit prior art").
Paul McKenney (about 1 year ago)
An "official IBM" version may be found at http://publib.boulder.ibm.com/infocenter/txformp/v6r0m0/topic/com.ibm.cics.te.doc/erziai00.pdf. See pages 106 and 108 of the PDF. I am uploading it as Rahan requested, but it is a >6 MB PDF...