Class hierarchy:

    object --+
        CacheAbstract --+
            CacheInRam
RAM-based caching.

This is implemented as a global dictionary (per process, shared by all threads). A mutex-lock mechanism avoids conflicts.
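The mechanism described above can be sketched as follows. This is a minimal illustration, not web2py's actual implementation; the names `_storage`, `_locker`, and `ram_get` are hypothetical, and the `(timestamp, value)` entry layout is an assumption.

```python
import threading
import time

# Hypothetical sketch: a module-level dictionary shared by all threads
# in the process, guarded by a single lock to avoid conflicts.
_storage = {}
_locker = threading.Lock()

def ram_get(key, f, time_expire=300):
    """Return the cached value for key, recomputing it via f() when the
    entry is missing or older than time_expire seconds."""
    with _locker:
        item = _storage.get(key)
        now = time.time()
        if item is None or now - item[0] > time_expire:
            value = f()
            _storage[key] = (now, value)
            return value
        return item[1]
```

Because the dictionary is per process, separate worker processes each hold their own copy; only threads within one process share entries.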
Class Variables:

    locker = <thread.lock object at 0x2944450>
    meta_storage =
Attention! cache.ram does not copy the cached object; it only stores a reference to it. It turns out that deepcopying the object has some problems:

- it would break backward compatibility
- it would be limiting, because people may want to cache live objects
- it would only work if we deepcopied on both storage and retrieval, which would make things slow

If you need a copy, you can deepcopy explicitly in the function that generates the value to be cached.
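The reference semantics above can be demonstrated with a plain dict standing in for the RAM cache (the `cache` name here is illustrative, not web2py's API):

```python
import copy

# A plain dict as a stand-in for the RAM cache described above.
cache = {}

data = [1, 2, 3]
cache['rows'] = data            # stores a reference, not a copy
data.append(4)                  # the mutation is visible through the cache
assert cache['rows'] == [1, 2, 3, 4]

# To cache a snapshot instead, deepcopy inside the generating function:
cache['snapshot'] = copy.deepcopy(data)
data.append(5)                  # the snapshot is unaffected
assert cache['snapshot'] == [1, 2, 3, 4]
```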
|
Initializes the object.

Args:
    request: the global request object
|
Clears the cache of all keys that match the provided regular expression. If no regular expression is provided, all entries in the cache are cleared.

Args:
    regex: if provided, only keys matching the regex will be cleared; otherwise all keys are cleared
|
Increments the cached value for the given key by the amount in value.

Args:
    key (str): key for the cached object to be incremented
    value (int): amount of the increment (defaults to 1, can be negative)
Generated by Epydoc 3.0.1 on Sun Mar 16 02:36:11 2014 (http://epydoc.sourceforge.net)