Friday, May 25, 2007

ACEGI Authentication Provider Examples

Acegi provides a very flexible way to configure the authentication provider. Out of the box it provides two implementations of the authentication provider:

- InMemoryDaoImpl : Retrieves user details from an in-memory map created by the bean context, so the list of users, their passwords, and their roles is specified directly in the bean configuration file. (http://www.acegisecurity.org/multiproject/acegi-security/apidocs/org/acegisecurity/userdetails/memory/InMemoryDaoImpl.html). See below for a sample bean configuration:

<bean id="inMemoryDaoImpl" class="org.acegisecurity.userdetails.memory.InMemoryDaoImpl">
  <property name="userMap">
    <value>
      marissa=koala,ROLE_TELLER,ROLE_SUPERVISOR
      dianne=emu,ROLE_TELLER
      scott=wombat,ROLE_TELLER
      peter=opal,disabled,ROLE_TELLER
    </value>
  </property>
</bean>


- JdbcDaoImpl : Retrieves user details (username, password, enabled flag, and authorities) from a JDBC data source. A default database schema is assumed; most users of this class who have an existing schema will need to override it by setting the default query strings. If that does not provide enough flexibility, another strategy is to subclass JdbcDaoImpl and override the MappingSqlQuery instances via the initMappingSqlQueries() extension point. (http://www.acegisecurity.org/multiproject/acegi-security/apidocs/org/acegisecurity/userdetails/jdbc/JdbcDaoImpl.html). See below for a sample configuration that defines a DataSource from a JDBC driver and wires JdbcDaoImpl to it. Irrespective of the database used and how the DataSource is obtained, the standard schema must exist in the database.

-- Define a DataSource using a JDBC driver
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
  <property name="driverClassName"><value>org.hsqldb.jdbcDriver</value></property>
  <property name="url"><value>jdbc:hsqldb:hsql://localhost:9001</value></property>
  <property name="username"><value>sa</value></property>
  <property name="password"><value></value></property>
</bean>

-- Wire JdbcDaoImpl to the DataSource
<bean id="jdbcDaoImpl" class="org.acegisecurity.userdetails.jdbc.JdbcDaoImpl">
  <property name="dataSource"><ref bean="dataSource"/></property>
</bean>
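If your existing schema does not match the default, the query strings can be overridden directly in the bean definition via JdbcDaoImpl's usersByUsernameQuery and authoritiesByUsernameQuery properties. A sketch only — the table and column names (app_user, app_user_role, etc.) are hypothetical placeholders for a legacy schema:

```xml
<bean id="jdbcDaoImpl" class="org.acegisecurity.userdetails.jdbc.JdbcDaoImpl">
  <property name="dataSource"><ref bean="dataSource"/></property>
  <!-- hypothetical legacy tables; adjust names/columns to your schema.
       Each query must return columns in the order the default queries do:
       username, password, enabled / username, authority -->
  <property name="usersByUsernameQuery">
    <value>SELECT login, passwd, active FROM app_user WHERE login = ?</value>
  </property>
  <property name="authoritiesByUsernameQuery">
    <value>SELECT login, role_name FROM app_user_role WHERE login = ?</value>
  </property>
</bean>
```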


- Custom implementation : The above two implementations both implement the UserDetailsService interface. If you have more complex needs (such as a special schema, or you want a particular UserDetails implementation returned), you are better off writing your own UserDetailsService (http://www.acegisecurity.org/multiproject/acegi-security/apidocs/index.html?org/acegisecurity/userdetails/UserDetailsService.html).

Code Sample:

<bean id="authenticationManager" class="org.acegisecurity.providers.ProviderManager">
  <property name="providers">
    <list>
      <ref local="daoAuthenticationProvider"/>
    </list>
  </property>
</bean>

<bean id="daoAuthenticationProvider" class="org.acegisecurity.providers.dao.DaoAuthenticationProvider">
  <property name="userDetailsService"><ref bean="UserService"/></property>
</bean>

<bean id="UserService" class="com.icrossing.xxx.CustomAuthenticationProvider"/>

-- Sample implementation

import java.util.Iterator;
import java.util.Set;

import org.acegisecurity.GrantedAuthority;
import org.acegisecurity.GrantedAuthorityImpl;
import org.acegisecurity.userdetails.UserDetails;
import org.acegisecurity.userdetails.UserDetailsService;
import org.acegisecurity.userdetails.UsernameNotFoundException;
import org.springframework.dao.DataAccessException;
import org.springframework.dao.DataRetrievalFailureException;

// User, Role, DataStoreException and getUserDAO() are application-specific classes/helpers.
public class CustomAuthenticationProvider implements UserDetailsService
{
    public UserDetails loadUserByUsername(String userId) throws UsernameNotFoundException, DataAccessException
    {
        User user = null;
        GrantedAuthority[] grantedAuthorities = null;
        try {
            user = getUserDAO().lookupUser(userId);

            if (user == null) {
                throw new UsernameNotFoundException("Invalid User");
            }

            Set roles = user.getRoles();
            grantedAuthorities = new GrantedAuthority[roles.size()];
            int i = 0;
            for (Iterator iter = roles.iterator(); iter.hasNext(); i++) {
                Role role = (Role) iter.next();
                grantedAuthorities[i] = new GrantedAuthorityImpl(role.getRole());
            }
        } catch (DataStoreException e) {
            throw new DataRetrievalFailureException("Cannot loadUserByUsername userId:" + userId + " Exception:" + e.getMessage(), e);
        }

        return new org.acegisecurity.userdetails.User(
            user.getUserId(),
            user.getPassword(),
            user.isEnabled(),  // enabled
            user.isEnabled(),  // accountNonExpired
            user.isEnabled(),  // credentialsNonExpired
            user.isEnabled(),  // accountNonLocked
            grantedAuthorities);
    }
}

FTPClient Default Buffer Policy

Just found this while researching the buffer policy of org.apache.commons.net.ftp.FTPClient (Apache commons net FTP).

Methods storeFile() and retrieveFile() in FTPClient use a default buffer size of 1024 (http://jakarta.apache.org/commons/net/apidocs/org/apache/commons/net/io/Util.html#DEFAULT_COPY_BUFFER_SIZE).

Methods storeFileStream() and retrieveFileStream() do not use a default buffer when the file type is BINARY; however, when the file type is ASCII they use a default buffer of 1024. Here's the developer's comment as to why:

// We buffer ascii transfers because the buffering has to
// be interposed between ToNetASCIIOutputStream and the underlying
// socket output stream. We don't buffer binary transfers
// because we don't want to impose a buffering policy on the
// programmer if possible. Programmers can decide on their
// own if they want to wrap the SocketOutputStream we return
// for file types other than ASCII.
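Since commons-net leaves binary transfers unbuffered, the usual approach is to wrap the stream you pass to (or receive from) storeFileStream()/retrieveFileStream() yourself. A minimal sketch of that wrapping policy, using an in-memory stream as a stand-in for the socket stream (the 64 KB buffer size is an arbitrary choice of mine, not a commons-net default):

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BufferedTransfer {
    // Copy with an explicit buffer wrapped around the raw output stream,
    // mirroring what you would do with the stream FTPClient returns for BINARY.
    public static long copy(InputStream in, OutputStream rawOut, int bufSize) throws IOException {
        OutputStream out = new BufferedOutputStream(rawOut, bufSize);
        byte[] chunk = new byte[4096];
        long total = 0;
        int n;
        while ((n = in.read(chunk)) != -1) {
            out.write(chunk, 0, n);
            total += n;
        }
        out.flush(); // don't lose the tail of the buffer
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[100000];
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink, 64 * 1024);
        System.out.println("copied " + copied + " bytes"); // prints "copied 100000 bytes"
    }
}
```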

Tuesday, May 22, 2007

FTPClient timeout values

In looking at the docs for org.apache.commons.net.ftp.FTPClient there are three timeouts which can be configured:

setDefaultTimeout : Set the default timeout in milliseconds to use when opening a socket. This value is only used prior to a call to connect() and should not be confused with setSoTimeout(), which operates on the currently opened socket.

setSoTimeout : Set the timeout in milliseconds of a currently open connection. Only call this method after a connection has been opened by connect().

setDataTimeout : Sets the timeout in milliseconds to use when reading from the data connection. This timeout will be set immediately after opening the data connection.

This seemed confusing, so I went ahead and peeked at the source code for FTPClient, and the whole thing made sense: FTPClient uses the underlying java.net.Socket, and the various timeouts apply at different stages of socket usage.

If setDefaultTimeout is set, then the underlying java.net.Socket.setSoTimeout() is applied to every connection made by this FTPClient instance. It basically saves you the trouble of calling setSoTimeout() after every connection is established.

If setSoTimeout is set, then the underlying java.net.Socket.setSoTimeout() is set for the current connection only; at disconnect() the value reverts to the default set via setDefaultTimeout(). If you call it before connecting, you'll get a NullPointerException.

If setDataTimeout is set, then the underlying java.net.Socket.setSoTimeout() is applied just before a read on the data connection, and after the read completes the timeout is restored to its pre-read value. It should therefore be called before a data connection is established (e.g., before a file transfer), because it doesn't affect an already active data connection. Normally, when a read() method tries to read data from a socket, the program blocks until data arrives; however, if you set the timeout property, read() will only wait the specified number of milliseconds and then throw an InterruptedIOException if no data was received. The data timeout applies to each individual socket read() call; it is not cumulative across reads.
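This read-timeout behavior is easy to see with a plain java.net.Socket, which exposes the same setSoTimeout() knob FTPClient drives under the hood. A small self-contained sketch (a local ServerSocket stands in for the FTP server, and the 200 ms value is an arbitrary choice for the demo):

```java
import java.io.InterruptedIOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ReadTimeoutDemo {
    // Returns true if read() gives up after the timeout instead of blocking forever.
    public static boolean readTimesOut(int timeoutMillis) throws Exception {
        try (ServerSocket server = new ServerSocket(0);                   // ephemeral local port
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {                         // server side never writes
            client.setSoTimeout(timeoutMillis);                           // the same knob FTPClient sets
            try {
                client.getInputStream().read();                           // would block forever otherwise
                return false;
            } catch (InterruptedIOException e) {                          // SocketTimeoutException is a subclass
                return true;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("read timed out: " + readTimesOut(200)); // prints "read timed out: true"
    }
}
```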

It may seem that defaultTimeout alone would suffice; however, there can be a need for read-specific data timeouts (e.g., you don't want a 2 GB file transfer to die just because there is a 10-minute loss of connectivity)...

On another note, what timeout values are optimal, given their implications at the various stages? Online research recommends roughly 5 secs for connect and 120 secs for writes/reads.
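Putting the three setters in call order against the commons-net FTPClient API makes the stages concrete. This is a pseudocode-style sketch only — the host, credentials, and file names are placeholders, and it won't run without a reachable FTP server:

```
FTPClient ftp = new FTPClient();
ftp.setDefaultTimeout(120 * 1000);   // applied to the socket at connect() time
ftp.connect("ftp.example.com");      // placeholder host
ftp.login("user", "pass");           // placeholder credentials
ftp.setSoTimeout(120 * 1000);        // overrides the default for this open control connection
ftp.setDataTimeout(1200 * 1000);     // applied to each data-connection read
ftp.storeFile("remote.dat", new FileInputStream("local.dat"));
ftp.disconnect();                    // soTimeout reverts to the default
```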

Update: 08/23/2007
So the interesting fact is after all the babbling above I was not able to make the above timeouts to work So here's what I tried: I have a file of size 57mb and I tried setting various combinations of timeouts for upload:

- set defaultTimeout (120 secs) and dataTimeout (1200 secs) before establishing login connection --> Result: upload failed with timeout

- set defaultTimeout (120 secs) before the login connection and dataTimeout (1200 secs) after the login connection --> Result: upload failed with timeout

- set defaultTimeout (120 secs) before the login connection and (1200 secs) after the login connection --> Result: upload failed with timeout

- set defaultTimeout (1200 secs) before login connection --> Result: upload succeeded

So I tried the above at various times to make sure I wasn't dealing with a network spike or anything of that sort, and got the same results. I will update once I find out more on this stuff.

Wednesday, May 02, 2007

OutOfMemory issue in JUnit

Recently one of my colleagues encountered this issue when running a JUnit test. Despite increasing the JVM memory settings, the error kept coming. On further research we found that if the JUnit task is set with fork="true", the task is executed in a forked VM, so the memory settings of the default VM have no effect. Instead, you need to set the maxmemory attribute of the junit task to avoid the OutOfMemoryError.

Example:

<target name="test.class.inner" if="test.class">
      <echo message="test.classpath"/>
      <mkdir dir="${test.output.dir}"/>
      <mkdir dir="${build.dir}/tmp"/>
      <junit dir="${build.dir}" haltonfailure="yes" haltonerror="yes" printsummary="on"
            fork="true" filtertrace="true" maxmemory="1024m">
      <sysproperty key="merchantize.env" value="test"/>
....