What's new at the CTserver site?
- Questions about CTserver software or webservices? Visit the forum.
- H2O-CO2 mixed-fluid solubility calculator for silicate melts of Papale et al. (2006) - online and Excel versions!
- Fe-Ti oxide geothermometer-oxybarometer of Ghiorso and Evans (2008) - online and Excel version! Updated version calculates activity of TiO2 in liquid coexisting with oxides.
- Olivine-spinel-orthopyroxene geothermometer-oxybarometer.
- Berman (1988) thermodynamic properties calculator.
- What is the purpose of this site?
- What is computational thermodynamics?
- Why do we need a computational thermodynamics server?
- What are the distributed computing software models promoted at this site?
The purpose of this site is to distribute software being developed at OFM Research Inc. and the University of Washington under an NSF-funded project (EAR 0609680) whose aim is to promote a distributed computing environment in computational thermodynamics for petrology and geochemistry. Software clients that access the server can be run from this site. Source code for these clients can be downloaded and used as models for user-written clients. Documentation on server functionality is provided as well.
If you are new to this site, we suggest that you read the sections below and then try one of the clients (the Phase Properties applet or the MELTS applet). To learn more about the nuts and bolts of how the server interacts with the clients, consult the documentation page, where you will find a complete description of server functionality and source code for clients written in Python, C++, and Java.
If you are interested in using computational thermodynamics web services on your site, please see the documentation provided on our web services page.
Computational thermodynamics (CT) is a tool that is used with increasing frequency by researchers and students of petrology and geochemistry. Applications have progressed beyond simple geothermometers and geobarometers to increasingly complex calculations of the thermodynamic properties of materials, the generation of phase diagrams and pseudo-sections, and the modeling of the chemical evolution of a system along geologically important irreversible reaction paths. Furthermore, as the increased availability of computational resources permits fluid dynamical modeling to incorporate realistic and compositionally dependent material properties, coupling CT with dynamical models has become both feasible and desirable.
CT calculations are relatively straightforward for materials with simple thermodynamic properties, like ideal solutions. As the underlying thermodynamic models for the phases involved become more complex, however, calculations can become exceedingly difficult, requiring sophisticated methods of numerical analysis. This difficulty stems from the fact that CT calculations are essentially explorations of the geometrical properties of multidimensional surfaces that describe the energy of a phase as a function of composition, temperature, and pressure. The more non-ideal the behavior, the more hills and valleys in the energy surface, and the more difficult it is numerically to wander about this multidimensional space locating minima and maxima and finding the tangent planes that establish equilibrium relations between phases in the system. CT is really an exercise in computational geometry. For petrologic applications involving mineral phases that characteristically exhibit solvi, phase transitions, and temperature- and compositionally-dependent cation ordering, the geometrical complexity of the energy surface can make routine calculations very cumbersome.
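A toy example, written from scratch for illustration (it is not the server's thermodynamic model), shows how non-ideality multiplies minima. For a symmetric regular binary solution, an interaction parameter W greater than 2RT opens a solvus: the Gibbs energy of mixing develops two minima separated by a hump, and a naive downhill search can land in the wrong valley.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gibbs_mix(x, T, W):
    """Molar Gibbs energy of mixing of a symmetric regular binary solution:
    ideal entropy-of-mixing term plus a non-ideal interaction term W*x*(1-x)."""
    return R * T * (x * math.log(x) + (1 - x) * math.log(1 - x)) + W * x * (1 - x)

def count_minima(T, W, n=1999):
    """Count local minima of G(x) on a uniform composition grid - a crude
    stand-in for the geometric exploration described in the text."""
    xs = [i / (n + 1) for i in range(1, n + 1)]
    g = [gibbs_mix(x, T, W) for x in xs]
    return sum(1 for i in range(1, n - 1)
               if g[i] < g[i - 1] and g[i] < g[i + 1])

# Ideal-like solution (W = 0): one minimum at x = 0.5, trivial to locate.
print(count_minima(1000.0, 0.0))      # 1
# Strongly non-ideal (W = 25000 J/mol > 2RT at 1000 K): two minima (a solvus).
print(count_minima(1000.0, 25000.0))  # 2
```

With many components and several such phases, these surfaces become high-dimensional, which is why robust minimization and tangent-plane tests require careful numerical treatment.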
One consequence of this energetic complexity is that software development for CT calculations of petrologically important phases requires considerable care and necessitates extensive testing. The student or researcher who simply wants to use CT results in the course of their work often has neither the time nor the expertise to implement the algorithms, and may well lack access to the computer hardware necessary to carry out the calculation. For these reasons we began an NSF-funded project in 2002 (EAR/ITR 0401510) with the aim of creating a distributed computing hardware/software infrastructure to support CT applications in petrology and geochemistry. We adopted two design objectives in this project. First, software would be delivered to the user using a client-server model, with algorithms optimized for the server platform and clients communicating with this server over the Internet. Second, the clients would be either user-friendly web-based applications that require no additional locally installed software, or user-written applications that communicate with the server using an established, standardized, language-independent communication protocol. In this way all levels of user sophistication and need can be accommodated, with reliable and tested CT functionality provided to the widest clientele.
The distributed computing models developed and implemented in this project have three design objectives. First, the communication protocol must be language independent, so that clients can access the functionality of the server free of the constraint (or indeed the knowledge) of exactly how the computations are executed. Second, the server compute model must be extensible to a multi-processor environment, so that client demand can be accommodated with minimal reconfiguration or recoding. Third, the client-server communication protocol must be standardized, requiring minimal or no additional software installation at the client end, and implemented with minimal overhead to reduce the bandwidth of network communication. The first software model/protocol that we have adopted to accomplish these goals is based on CORBA, the Common Object Request Broker Architecture. The CORBA standard is maintained by an industry consortium under the auspices of the Object Management Group (www.omg.org). CORBA serves as a translation and transport layer between computational resources and functions on a server and the client applications that use those resources. The beauty of CORBA is that client and server programs need not run on the same computer, use the same operating system, or be written in the same programming language. Language and architecture translation and network transport are handled seamlessly, in a process that is invisible to the user. CORBA has a long-established and successful history of use in distributed computing models in both science and industry, and meets our design criteria in all respects.
Schematic of CORBA-based distributed computing model:
The CORBA environment we have implemented for our CT server project is illustrated in the accompanying figure. The protocol functions in the following manner. A set of server “capabilities” is defined in an Interface Definition Language (IDL) that is published and made available to clients. These capabilities are in practice definitions of how to access and retrieve server computations of phase properties, equilibrium assemblages, etc. On the server end, the IDL definitions are translated into the preferred coding language using an IDL compiler distributed as part of the CORBA platform implementation. IDL is an ISO standard with language mappings for most popular programming languages. We use the IDL-to-C++ compiler that comes with omniORB (omniORB.sourceforge.net), an open-source CORBA implementation originally developed by AT&T.
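To give a flavor of what such a published interface looks like, here is a hypothetical IDL fragment in the spirit described above; the module, interface, and method names are illustrative placeholders, not the CTserver's actual published IDL.

```idl
// Hypothetical sketch of a CT-style server interface in OMG IDL.
module CTExample {
  typedef sequence<double> Composition;   // e.g., oxide mole fractions

  struct PhaseProperties {
    double gibbsEnergy;   // J/mol
    double enthalpy;      // J/mol
    double entropy;       // J/(mol K)
  };

  interface Thermodynamics {
    // Retrieve computed properties of a named phase at temperature t (K)
    // and pressure p (bars) for a given bulk composition.
    PhaseProperties phaseProperties(in string phaseName,
                                    in double t,
                                    in double p,
                                    in Composition x);
  };
};
```

An IDL compiler turns a definition like this into server-side skeletons (here, C++) and client-side stubs in whatever language the client author chooses.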
A client application, which generally runs on some remote platform, is coded in an appropriate language; calls to server methods are made by translating (compiling) the IDL into that language. The client accesses the server by calling methods in much the same way library functions are accessed on the local machine. The Internet connection, transport, and translation of requests and returned results are accomplished by the local and remote ORB (Object Request Broker) that comes with a CORBA implementation like omniORB. The principal language we have adopted to program our clients is Java, because it is widely available, because it is relatively easy to design portable Graphical User Interfaces (GUIs) in this language, and because Java comes with built-in CORBA technology. To minimize the need for users on remote platforms to install their own CORBA communication layer, our clients are distributed as Web-browser applets: all modern browsers have CORBA communication readily available as part of their run-time Java execution environment. So, in practice, a user can simply go to the CT server web site, download a Java applet, interact with that applet, and transparently retrieve results (computations) from the remote server.
One important advantage of this distributed computing infrastructure over less sophisticated client-server protocols is that users can write their own clients by installing a CORBA implementation and translating the published IDL server method specification into whatever computer language (e.g., Python, C++) suits the demands of their application. An additional advantage is that the Internet communication protocol used by CORBA (designated IIOP) is designed to minimize bandwidth. On the server side, the software that listens for client contacts can interact with multiple independent server processes, which can potentially run on multiple nodes of a compute cluster, delivering parallel computing performance.
The second distributed computing protocol that we have adopted for dissemination of CTserver resources is based on remote procedure calls (RPC) implemented using an XML- and HTTP-based communication protocol. These procedures are commonly referred to as web services. We have adopted the SOAP standard and implemented these services as a Java front end (with a C++ and C compute engine) on the server side, utilizing a Tomcat web server on port 8080. Web services deliver much of the same server-side capability as the CORBA-based libraries described above, but are often easier to use from web-based forms. See the web services link to access documentation on WSDLs and the general capabilities of the CT web services we provide.
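To illustrate the mechanics of a SOAP call, the sketch below builds a SOAP 1.1 request envelope using only the Python standard library. The service URL, namespace, method name, and parameter names are hypothetical placeholders, not the actual CTserver WSDL definitions; a real client would take them from the published WSDL.

```python
import urllib.request
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SERVICE_NS = "http://ctserver.example.org/ws"  # placeholder namespace

def build_envelope(method, params):
    """Wrap a method call and its parameters in a SOAP 1.1 envelope."""
    ET.register_namespace("soap", SOAP_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{SERVICE_NS}}}{method}")
    for name, value in params.items():
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

def call_service(url, method, params):
    """POST the envelope over HTTP, as a SOAP client would (not invoked here)."""
    req = urllib.request.Request(
        url,
        data=build_envelope(method, params).encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# Build (but do not send) a request for a hypothetical phase-properties method.
envelope = build_envelope("phaseProperties",
                          {"phase": "olivine", "t": 1273.15, "p": 1000.0})
print(envelope)
```

Because the request and response are plain XML over HTTP, a web form or script can consume the service without any CORBA machinery on the client side, which is why this route is convenient for browser-based tools.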
The hardware that we chose for server-side computations is a set of Apple Macintosh Xserve machines running the OS X operating system (10.4.3 as of this writing). This platform was chosen for ease of configuration and computational stability. CT calculations involve numerical optimization that demands a high level of machine precision and consistent rounding arithmetic. OS X as implemented on the 64-bit PowerPC chip provides this level of functionality; in our experience, 32-bit Intel-based platforms running Linux do not. The production server is located at OFM Research Inc.