News and Announcements (at) Apache Software Foundation. 노안돼지

The Apache Software Foundation provides support for the community of Apache open source software projects.
The Apache projects are characterized by a collaborative development process based on mutual consensus, an open and pragmatic software license, and the pursuit of high-quality software that leads the way in its field.

We are sometimes described as simply a group of projects sharing a server, but we think of ourselves rather as a community of developers and users.

Apache Jackrabbit 2.4.1 Released

News / Announcements | 2012. 4. 4. 08:59 | Posted by 노안돼지

The Apache Jackrabbit community is pleased to announce the release of Apache Jackrabbit 2.4.1. The release is available for download at:


    http://jackrabbit.apache.org/downloads.html


See the full release notes below for details about this release.



Release Notes -- Apache Jackrabbit -- Version 2.4.1


Introduction

------------


This is Apache Jackrabbit(TM) 2.4.1, a fully compliant implementation of the Content Repository for Java(TM) Technology API, version 2.0 (JCR 2.0) as specified in the Java Specification Request 283 (JSR 283).



Apache Jackrabbit 2.4.1 is a patch release that contains fixes and improvements over Jackrabbit 2.4.0. This release is fully compatible with earlier 2.x.x releases.
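
For readers who have not used JCR before, here is a minimal, illustrative sketch (not part of the official release notes) of how a Jackrabbit repository is typically accessed through the JCR 2.0 API. It assumes jackrabbit-core is on the classpath and uses the in-process TransientRepository; the class name, the "hello" node, and the "message" property are made up for the example, and admin/admin are Jackrabbit's default credentials.

    import javax.jcr.Node;
    import javax.jcr.Repository;
    import javax.jcr.Session;
    import javax.jcr.SimpleCredentials;
    import org.apache.jackrabbit.core.TransientRepository;

    public class FirstHop {
        public static void main(String[] args) throws Exception {
            // Start an in-process repository; by default it keeps its data
            // in a ./repository directory under the working directory.
            Repository repository = new TransientRepository();
            Session session = repository.login(
                    new SimpleCredentials("admin", "admin".toCharArray()));
            try {
                // Create some content below the root node and persist it.
                Node root = session.getRootNode();
                Node hello = root.addNode("hello");
                hello.setProperty("message", "Hello from Jackrabbit 2.4.1");
                session.save();

                // Read the stored value back.
                System.out.println(
                        root.getNode("hello").getProperty("message").getString());
            } finally {
                session.logout();
            }
        }
    }

TransientRepository starts on the first login and shuts down when the last session is closed, which keeps throwaway experiments like this self-contained.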


Changes since Jackrabbit 2.4.0

------------------------------


Improvements


  [JCR-3237] add missing name constants for mix:title

  [JCR-3254] make max size of CachingEntryCollector's cache configurable

  [JCR-3255] Access cluster node id

  [JCR-3259] augment logging information around CachingEntryCollector

  [JCR-3280] SQL2 joins on empty sets are not efficient


Bug fixes


  [JCR-3158] Deadlock in DBCP when accessing node

  [JCR-3227] VolatileIndex not closed properly

  [JCR-3236] Can not instantiate lucene Analyzer in SearchIndex

  [JCR-3247] SQL2 ISDESCENDANTNODE BooleanQuery#TooManyClauses returns

  [JCR-3250] webapp welcome page shows incorrect port when port is the ...

  [JCR-3261] Problems with BundleDbPersistenceManager getAllNodeIds

  [JCR-3266] JCR-SQL2 query with multiple columns in result only returns ...

  [JCR-3268] Re-index fails on corrupt bundle

  [JCR-3270] Error instantiating lucene search index in Turkish Regional ...


Changes since Jackrabbit 2.2.0

------------------------------


New features


  [JCR-2859] Make open scoped locks recoverable

  [JCR-2936] JMX Bindings for Jackrabbit

  [JCR-3005] Make it possible to get multiple nodes in one call via davex

  [JCR-3040] JMX Stats for the Session

  [JCR-3117] Stats for the PersistenceManager

  [JCR-3118] Configurable actions upon authorizable creation and removal

  [JCR-3124] Stats for Queries

  [JCR-3140] Add configurable hook for password validation

  [JCR-3154] Stats for Queries continued

  [JCR-3183] Add memory based bundle store


Improvements


  [JCR-1443] Make JCAManagedConnectionFactory non final, so it can be extended

  [JCR-2798] JCAManagedConnectionFactory should chain cause exception

  [JCR-2887] Split PrivilegeRegistry in a per-session manager instance ...

  [JCR-2906] Multivalued property sorted by last/random value

  [JCR-2989] Support for embedded index aggregates

  [JCR-3017] Version history recovery fails in case a version does not ...

  [JCR-3030] Permit using different tablespaces for tables and indexes ...

  [JCR-3084] Script for checking releases

  [JCR-3085] better diagnostics when version storage is broken

  [JCR-3091] Lucene Scorer implementations should handle the 'advance' ...

  [JCR-3098] Add hit miss statistics and logging to caches

  [JCR-3102] InternalVersion.getFrozenNode confused about root version?

  [JCR-3107] Speed up hierarchy cache initialization

  [JCR-3109] Move PersistenceManagerTest from o.a.j.core to o.a.j.core....

  [JCR-3114] expose PM for versioning manager so that the consistency ...

  [JCR-3119] Improve aggregate node indexing code

  [JCR-3120] Change log level in UserManagerImpl#getAuthorizable(NodeImpl) ...

  [JCR-3122] QueryObjectModelImpl should execute queries as SessionOperation(s)

  [JCR-3127] Upgrade to Tika 0.10

  [JCR-3129] It should be possible to create a non-transient Repository ...

  [JCR-3132] Test tooling updates

  [JCR-3133] Query Stats should use the TimeSeries mechanism

  [JCR-3135] Upgrade to Logback 1.0

  [JCR-3136] Add m2e lifecycle mappings for Eclipse Indigo

  [JCR-3138] Skip sync delay when changes are found

  [JCR-3141] Upgrade to Tika 1.0

  [JCR-3142] Create OSGi Bundles from jackrabbit-webdav and ...

  [JCR-3143] SessionImpl#isSupportedOption: Skip descriptor evaluation ...

  [JCR-3146] Text extraction may congest thread pool in the repository

  [JCR-3161] Add JcrUtils.getPropertyTypeNames

  [JCR-3162] Index update overhead on cluster slave due to JCR-905

  [JCR-3165] Consolidate compare behaviour for Value(s) and Comparable(s)

  [JCR-3167] Make Jackrabbit compile on Java 7

  [JCR-3170] Precompile JavaCC parsers in jackrabbit-spi-commons

  [JCR-3172] implement PERSIST events for the EventJournal

  [JCR-3177] Remove jdk 1.4 restriction for jcr-tests

  [JCR-3178] Improve error messages for index aggregates

  [JCR-3184] extend ConsistencyChecker API to allow adoption of orphaned ...

  [JCR-3185] refactor consistency checks in BundleDBPersistenceManager ...

  [JCR-3199] workspace-wide default for lock timeout

  [JCR-3200] consistency check should get node ids in chunks, not rely on ...

  [JCR-3202] AuthorizableImpl#memberOf and #declaredMemberOf should ...

  [JCR-3203] GroupImp#getMembers and #getDeclaredMembers should return ...

  [JCR-3222] Allow servlet filters to specify custom session providers


Bug fixes


  [JCR-2539] spi2dav: Observation's user data not property handled

  [JCR-2540] spi2dav : move/reorder not properly handled by observation

  [JCR-2541] spi2dav : EventJournal not  implemented

  [JCR-2542] spi2dav: EventFilters not respected

  [JCR-2543] spi2dav : Query offset not respected

  [JCR-2774] Access control for repository level API operations

  [JCR-2892] Large fetch sizes have potentially deleterious effects on ...

  [JCR-2930] same named child nodes disappear on restore

  [JCR-3082] occasional index out of bounds exception while running ...

  [JCR-3086] potential infinite loop around InternalVersionImpl.getSuccessors

  [JCR-3089] javax.jcr.RepositoryException when a JOIN SQL2 query is ...

  [JCR-3090] setFetchSize() fails in getAllNodeIds()

  [JCR-3093] Inconsistency between Session.getProperty and Node....

  [JCR-3095] Move operation may turn AC caches stale

  [JCR-3101] recovery tool does not recover when version history can ...

  [JCR-3105] NPE when versioning operations are concurrent

  [JCR-3108] SQL2 ISDESCENDANTNODE can throw BooleanQuery#...

  [JCR-3110] QNodeTypeDefinitionImpl.getSerializablePropertyDefs() ...

  [JCR-3111] InternalVersionManagerBase; missing null check after getNode()

  [JCR-3112] NodeTypeDefDiff.PropDefDiff.init() constraints change check ...

  [JCR-3115] Versioning fixup leaves persistence in a state where the ...

  [JCR-3116] Cluster Node ID should be trimmed

  [JCR-3126] The CredentialsWrapper should use a empty String as userId ...

  [JCR-3128] Problem with formerly escaped JCR node names when upgrading ...

  [JCR-3131] NPE in ItemManager when calling Session.save() with nothing ...

  [JCR-3139] missing sync in InternalVersionManagerImpl.externalUpdate ...

  [JCR-3148] Using transactions still leads to memory leak

  [JCR-3149] AccessControlProvider#getEffectivePolicies for a set of ...

  [JCR-3151] SharedFieldCache can cause a memory leak

  [JCR-3152] AccessControlImporter does not import repo level ac content

  [JCR-3156] Group#getMembers may list inherited members multiple times

  [JCR-3159] LOWER operand with nested LOCALNAME operand not work with SQL2

  [JCR-3160] Session#move doesn't trigger rebuild of parent node aggregation

  [JCR-3163] NPE in RepositoryServiceImpl.getPropertyInfo()

  [JCR-3174] Destination URI should be normalized

  [JCR-3175] InputContextImpl: cannot upload file larger than 2GB

  [JCR-3176] JCARepositoryManager does not close InputStream

  [JCR-3189] JCARepositoryManager.createNonTransientRepository throws NPE ...

  [JCR-3194] ConcurrentModificationException in CacheManager.

  [JCR-3195] wrong assumptions in test cases about lock tokens

  [JCR-3198] Broken handling of outer join results over davex

  [JCR-3205] Missing support for lock timeout and ownerHint in jcr-server

  [JCR-3210] NPE in spi2dav when server does not send all headers

  [JCR-3214] [Lock] weird number for "infinite"

  [JCR-3216] When fetching node ids in checks for the checker all ...

  [JCR-3218] UserImporter should trigger execution AuthorizableActions ...

  [JCR-3220] simple webdav server does not support lock timeouts

  [JCR-3223] Disallow unregistering of node types still (possibly) in use

  [JCR-3224] SystemSession#createSession should return SessionImpl again

  [JCR-3225] ConcurrentModificationException in QueryStatImpl


In addition to the above-mentioned changes, this release contains all the changes included up to the Apache Jackrabbit 2.2.0 release.


For more detailed information about all the changes in this and other Jackrabbit releases, please see the Jackrabbit issue tracker at


    https://issues.apache.org/jira/browse/JCR


Release Contents

----------------


This release consists of a single source archive packaged as a zip file.

The archive can be unpacked with the jar tool from your JDK installation.

See the README.txt file for instructions on how to build this release.


The source archive is accompanied by SHA1 and MD5 checksums and a PGP signature that you can use to verify the authenticity of your download.

The public key used for the PGP signature can be found at https://svn.apache.org/repos/asf/jackrabbit/dist/KEYS.
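
The SHA1 value can also be recomputed locally and compared with the published checksum. The short sketch below is only an illustration (the class name and reading the archive path from the first argument are choices made for this example, not something shipped with the release); verifying the PGP signature still requires a separate tool such as GnuPG together with the KEYS file above.

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.security.MessageDigest;

    public class Sha1Check {
        public static void main(String[] args) throws Exception {
            // Compute the SHA-1 digest of the file given as the first argument
            // and print it as lowercase hex for comparison with the published value.
            MessageDigest digest = MessageDigest.getInstance("SHA-1");
            try (InputStream in = new FileInputStream(args[0])) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    digest.update(buffer, 0, read);
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : digest.digest()) {
                hex.append(String.format("%02x", b));
            }
            System.out.println(hex);
        }
    }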


About Apache Jackrabbit

-----------------------


Apache Jackrabbit is a fully conforming implementation of the Content Repository for Java Technology API (JCR). A content repository is a hierarchical content store with support for structured and unstructured content, full text search, versioning, transactions, observation, and more.
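
Several entries in the changelog above touch JCR-SQL2 (for example the ISDESCENDANTNODE and join fixes). As a rough sketch of what that part of the JCR 2.0 API looks like from client code, and not something taken from the release notes, the method below runs a full-text JCR-SQL2 query against an already opened session; the /content path and the nt:unstructured node type are arbitrary placeholders.

    import javax.jcr.Node;
    import javax.jcr.NodeIterator;
    import javax.jcr.Session;
    import javax.jcr.query.Query;
    import javax.jcr.query.QueryManager;
    import javax.jcr.query.QueryResult;

    public class Sql2Example {

        // Prints the paths of all nt:unstructured nodes below /content
        // whose full-text index matches the word 'Jackrabbit'.
        static void findMatches(Session session) throws Exception {
            QueryManager qm = session.getWorkspace().getQueryManager();
            Query query = qm.createQuery(
                    "SELECT * FROM [nt:unstructured] AS n "
                    + "WHERE ISDESCENDANTNODE(n, '/content') "
                    + "AND CONTAINS(n.*, 'Jackrabbit')",
                    Query.JCR_SQL2);
            QueryResult result = query.execute();
            for (NodeIterator it = result.getNodes(); it.hasNext(); ) {
                Node node = it.nextNode();
                System.out.println(node.getPath());
            }
        }
    }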


For more information, visit http://jackrabbit.apache.org/


About The Apache Software Foundation

------------------------------------


Established in 1999, The Apache Software Foundation provides organizational, legal, and financial support for more than 100 freely-available, collaboratively-developed Open Source projects. The pragmatic Apache License enables individual and commercial users to easily deploy Apache software; the Foundation's intellectual property framework limits the legal exposure of its 2,500+ contributors.


For more information, visit http://www.apache.org/



Apache Libcloud 0.9.1 Released

News / Announcements | 2012. 4. 3. 08:59 | Posted by 노안돼지

The Libcloud team is pleased to announce the release of Libcloud 0.9.1!

Release highlights:

- A lot of improvements and additional functionality in the OpenStack driver. Now the generic OpenStack driver (Provider.OPENSTACK) also works with devstack.org and trystack.org installations

- Improvements and better exception propagation in the deploy_node method

- New driver for the ElasticHosts Los Angeles and Toronto locations

- Support for the new EC2 instance type - m1.medium

 

Bug fixes:

- Don't lowercase special header names in the Amazon S3 storage driver. This fixes a bug with multi-object delete calls.

- Properly handle OpenStack providers which return public IP addresses under the 'internet' key in the 'addresses' dictionary

- Make create_node in the Linode driver return a Node instance instead of a list of Node instances

 For a full list of changes, please see the CHANGES file <https://svn.apache.org/viewvc/libcloud/tags/0.9.1/CHANGES?revision=r1307716&view=markup>.

 Download

 Libcloud 0.9.1 can be downloaded from http://libcloud.apache.org/downloads.html or installed using pip:

 

pip install apache-libcloud

 It is possible that the file hasn't been synced to all the mirrors yet. If this is the case, please use the main Apache mirror - http://www.apache.org/dist/libcloud

 

Upgrading

If you have installed Libcloud using pip, you can also use pip to upgrade it:

 

pip install --upgrade apache-libcloud

 Upgrade notes

 A page which describes backward incompatible or semi-incompatible changes and how to preserve the old behavior when this is possible can be found at http://libcloud.apache.org/upgrade-notes.html.

 Documentation

 API documentation can be found at http://libcloud.apache.org/apidocs/0.9.1/.

 Bugs / Issues

If you find any bug or issue, please report it on our issue tracker <https://issues.apache.org/jira/browse/LIBCLOUD>. Don't forget to attach an example and/or test which reproduces your problem.

 Thanks

 Thanks to everyone who contributed and made this release possible! Full list of people who contributed to this release can be found in the CHANGES file <https://svn.apache.org/viewvc/libcloud/tags/0.9.1/CHANGES?revision=r1307716&view=markup>.

Enjoy!

 


The Apache Software Foundation Announces Apache Sqoop as a Top-Level Project

[this announcement is also available online at http://s.apache.org/mU]

Open Source big data tool used for efficient bulk transfer between Apache Hadoop and structured datastores.

Forest Hill, MD -- The Apache Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of nearly 150 Open Source projects and initiatives, today announced that Apache Sqoop has graduated from the Apache Incubator to become a Top-Level Project (TLP), signifying that the Project’s community and products have been well-governed under the ASF's meritocratic process and principles.

 Designed to efficiently transfer bulk data between Apache Hadoop and structured datastores such as relational databases, Apache Sqoop allows the import of data from external datastores and enterprise data warehouses into Hadoop Distributed File System or related systems like Apache Hive and HBase.

 "The Sqoop Project has demonstrated its maturity by graduating from the Apache Incubator," explained Arvind Prabhakar, Vice President of Apache Sqoop. "With jobs transferring data on the order of billions of rows, Sqoop is proving its value as a critical component of production environments."

 Building on the Hadoop infrastructure, Sqoop parallelizes data transfer for fast performance and best utilization of system and network resources. In addition, Sqoop allows fast copying of data from external systems to Hadoop to make data analysis more efficient and mitigates the risk of excessive load to external systems. 

 "Connectivity to other databases and warehouses is a critical component for the evolution of Hadoop as an enterprise solution, and that's where Sqoop plays a very important role" said Deepak Reddy, Hadoop Manager at Coupons.com. "We use Sqoop extensively to store and exchange data between Hadoop and other warehouses like Netezza. The power of Sqoop also comes in the ability to write free-form queries against structured databases and pull that data into Hadoop."

 "Sqoop has been an integral part of our production data pipeline" said Bohan Chen, Director of the Hadoop Development and Operations team at Apollo Group. "It provides a reliable and scalable way to import data from relational databases and export the aggregation results to relational databases."

 Since entering the Apache Incubator in June 2011, Sqoop was quickly embraced as an ideal SQL-to-Hadoop data transfer solution. The Project provides connectors for popular systems such as MySQL, PostgreSQL, Oracle, SQL Server and DB2, and also allows for the development of drop-in connectors that provide high speed connectivity with specialized systems like enterprise data warehouses.

Craig Ling, Director of Business Systems at Tsavo Media, said, "We adopted the use of Sqoop to transfer data into and out of Hadoop with our other systems over a year ago. It is straightforward and easy to use, which has opened the door to allow team members to start consuming data autonomously, maximizing the analytical value of our data repositories."

 Availability and Oversight

Apache Sqoop software is released under the Apache License v2.0, and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. Apache Sqoop source code, documentation, mailing lists, and related resources are available at http://sqoop.apache.org/.

 

About The Apache Software Foundation (ASF)

Established in 1999, the all-volunteer Foundation oversees nearly one hundred fifty leading Open Source projects, including Apache HTTP Server — the world's most popular Web server software. Through the ASF's meritocratic process known as "The Apache Way," more than 350 individual Members and 3,000 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation's official user conference, trainings, and expo. The ASF is a US 501(c)(3) not-for-profit charity, funded by individual donations and corporate sponsors including AMD, Basis Technology, Cloudera, Facebook, Google, IBM, HP, Hortonworks, Matt Mullenweg, Microsoft, PSW Group, SpringSource/VMware, and Yahoo!. For more information, visit  http://www.apache.org/.

 

"Apache", "Apache Sqoop", and "ApacheCon" are trademarks of The Apache Software Foundation. All other brands and trademarks are the property of their respective owners.

 

#  #  #

 

= = = = =

Boston +1 617 921 8656

New York +1 917 725 2133

London +44 (0) 20 3239 9686

skype sallykhudairi
