Web Service Enabling Spatial Applications

Based on open standards for web services and spatial data

Some of the services that have been standardized:
 – location services (routing, mapping, geocoding, directory services)
 – catalogue services (discovery, browse, and query against catalog servers)
 – map services (request/provide maps and info about the content of a map)

requirements
 – access/search/update/delete geo-spatial features
 – access/update securely
 – manage feature privs at an instance level
 – real-time transfer of feature instances using standards

approach
 – use SOAP for request/response
 – XML over HTTP POST method for request/response
 – Spatial for feature instance storage/retrieval
 – implement the OGC filter specification for feature search (see the request sketch after this list)
 – use WSS/LDAP for authentication, row-level security for instance-level privilege management, and WSS for secure transfer of feature data
 – support publishing feature types from a database data source (complex columns, nested tables, XML types)
 – support publishing feature types from external data sources
 – implement token-based locking to support the WFS locking protocol, which uses a long-transaction model to artificially lock rows in the database
 – implement a feature cache in the middle tier to reduce spatial data transfer from the database to the app server
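
As a concrete illustration of the XML-over-HTTP-POST approach and the OGC filter support, here is a hedged client-side sketch in Java; the endpoint URL, namespace, and type/attribute names are invented for the example, not taken from the session.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class WfsClientSketch {
        // Placeholder endpoint; the real deployment path will differ.
        static final String ENDPOINT = "http://localhost:8888/SpatialWS/wfs";

        public static void main(String[] args) throws Exception {
            // WFS 1.0 GetFeature with an OGC filter restricting the search.
            String getFeature = """
                <wfs:GetFeature service="WFS" version="1.0.0"
                    xmlns:wfs="http://www.opengis.net/wfs"
                    xmlns:ogc="http://www.opengis.net/ogc">
                  <wfs:Query typeName="myns:STATES">
                    <ogc:Filter>
                      <ogc:PropertyIsEqualTo>
                        <ogc:PropertyName>STATE_NAME</ogc:PropertyName>
                        <ogc:Literal>California</ogc:Literal>
                      </ogc:PropertyIsEqualTo>
                    </ogc:Filter>
                  </wfs:Query>
                </wfs:GetFeature>
                """;

            HttpRequest req = HttpRequest.newBuilder(URI.create(ENDPOINT))
                    .header("Content-Type", "text/xml")
                    .POST(HttpRequest.BodyPublishers.ofString(getFeature))
                    .build();
            HttpResponse<String> resp = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.body()); // a GML feature collection on success
        }
    }

In the real setup this exchange would go through the WSS-secured proxy described later rather than a bare HTTP client.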
 
WFS Operations
 – GetCapabilities
 – DescribeFeatureType
 – transactional – GetFeatureWithLock
                 – LockFeature
                 – Insert, Update, Delete
The transaction and locking can span sessions. The only way to unlock is through the WFS handler, which is integrated into the OC4J/J2EE container. An expiration time, either the default or one specified by the client, covers the case where the client goes away for a long time. Triggers on the tables/views make sure that rows are properly locked and that the same client comes back with the proper token.
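
To make the token flow concrete, here is a hedged sketch of the WFS 1.0 payloads involved; the type name and feature id are placeholders, and the expiry attribute carries the client-specified timeout (in minutes) mentioned above.

    public class WfsLockSketch {
        // Lock one feature instance; expiry is the client-side timeout.
        static final String LOCK_REQUEST = """
            <wfs:LockFeature service="WFS" version="1.0.0" expiry="5"
                xmlns:wfs="http://www.opengis.net/wfs"
                xmlns:ogc="http://www.opengis.net/ogc">
              <wfs:Lock typeName="myns:STATES">
                <ogc:Filter><ogc:FeatureId fid="STATES.42"/></ogc:Filter>
              </wfs:Lock>
            </wfs:LockFeature>
            """;

        // The response carries a <LockId> token; quoting it back in a later
        // Transaction proves to the WFS handler that the same client still
        // holds the lock, even from a different session.
        static String transaction(String lockId) {
            return """
                <wfs:Transaction service="WFS" version="1.0.0"
                    xmlns:wfs="http://www.opengis.net/wfs"
                    xmlns:ogc="http://www.opengis.net/ogc">
                  <wfs:LockId>%s</wfs:LockId>
                  <wfs:Delete typeName="myns:STATES">
                    <ogc:Filter><ogc:FeatureId fid="STATES.42"/></ogc:Filter>
                  </wfs:Delete>
                </wfs:Transaction>
                """.formatted(lockId);
        }
    }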

WFS Metadata
 – feature types, type tags, type attributes
 – complex types
 – spatial operators

Two kinds of data sources are supported: relational data, handled through a PL/SQL API, and XSD-based data types, handled through a Java API. The Java API is typically used to register feature types and feature type metadata.
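
The notes do not record the Java API's actual entry points, so the following shape sketch is entirely hypothetical; it only shows the kind of call a feature-type registration involves.

    // Every class and method name below is hypothetical: a stand-in for the
    // real registration API, whose names the notes do not capture.
    FeatureTypeDescriptor states = new FeatureTypeDescriptor("myns", "STATES");
    states.setSourceTable("STATES");                 // relational source table
    states.setGeometryColumn("GEOM");                // SDO_GEOMETRY column
    states.addAttribute("STATE_NAME", "xsd:string"); // attribute mapped to an XSD type
    new FeatureTypeRegistry(conn).register(states);  // conn is a JDBC connection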

There are two use cases – type supplier and type consumer.

The implementation is fully compliant with the WFS 1.0 spec.

Auth is done client -> OC4J using SOAP/WSS. We then use standard OC4J connections to connect to the database, which allows the use of VPD and user views on top of the database.
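
One common way to make VPD useful through shared mid-tier connections (my assumption, not necessarily what this implementation does) is to tag each database session with the WSS-authenticated identity, which a VPD policy can then read back via SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER'):

    import java.sql.CallableStatement;
    import java.sql.Connection;

    public class SessionTagging {
        // Sketch only: DBMS_SESSION.SET_IDENTIFIER sets the session's client
        // identifier so row-level security policies can filter per user.
        static void tagSession(Connection conn, String wssUser) throws Exception {
            try (CallableStatement cs =
                     conn.prepareCall("BEGIN DBMS_SESSION.SET_IDENTIFIER(?); END;")) {
                cs.setString(1, wssUser);
                cs.execute();
            }
        }
    }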

The demo used OEM for Application Server to configure the server. We define an application and deploy it using the application administration pages. When we define the app we also define the security service and the data sources; in this example we use a WFS connection. We also create a web service with the certificate and signature key definitions for security.

The client side is developed using JDeveloper. The feature type/app comes from an XML description of the spatial data type. We create an empty project and add a web service proxy using the WSDL file defined by the spatial service. At this point we have a non-secure connection, so we need to secure it: we go to the proxy and edit the security parameters. In this example we use a digest with no encryption (for demo purposes only) and the public keystore and private keys of the client and server. At this point we have a secure proxy and just need a password for the signature key alias. To debug the exchange we run the HTTP Analyzer in JDeveloper so that we can see the request coming from the client and the answer from the server.

At this point we have some content that we want to publish. We define a JDeveloper client and link its security to the server's security policies. We then run a test to look at the connection and verify that it is working properly. Finally we can encrypt the data and turn off the HTTP debugger.

The second demo uses a relational connection to the data. To do this we use Map Builder to load data and push it into the spatial database. We do this by connecting to the service's WSDL and looking up the interfaces that we can use. We connect and look at the features, select a spatial column (GEOM), and correlate an element in the data (State Name) to the map. At this point we can create a query and return the map data with the state name placed inside the state that the query finds.

OpenLS is a translation service handled on the back end; it operates as expected according to the standard.

Catalog Services are a little more complex. A catalog request can be spatial or relational in nature, and the catalog server returns metadata associated with the item in question. The implementation again uses SOAP for request/response and XML over the HTTP POST method. The difference is that ResultSet caching will be supported to retrieve records from a single query across different web locations, which allows the user to scroll through related material that is close to the search object returned. The cache is currently not tunable and is handled by the server; future releases will give hints to the cache service on what to populate.
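
Assuming an OGC CSW-style interface (my assumption; the notes do not name the protocol version), scrolling maps naturally onto a paged request, POSTed like the WFS examples above. Advancing startPosition on later calls is what lets the server answer from its cached ResultSet instead of re-running the query:

    public class CswPagingSketch {
        // Hedged sketch of a CSW 2.0-style paging request; this asks for
        // records 11-20 of a previously issued query.
        static final String GET_RECORDS = """
            <csw:GetRecords service="CSW" version="2.0.2"
                resultType="results" startPosition="11" maxRecords="10"
                xmlns:csw="http://www.opengis.net/cat/csw/2.0.2">
              <csw:Query typeNames="csw:Record">
                <csw:ElementSetName>summary</csw:ElementSetName>
              </csw:Query>
            </csw:GetRecords>
            """;
    }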

The Web Map Service supports GetCapabilities, GetMap, and GetFeatureInfo.
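
For reference, a minimal GetMap request in WMS key-value form looks like the following; the host, path, layer, and bounding box are placeholders:

    http://host:port/mapviewer/wms?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=STATES&STYLES=&SRS=EPSG:4326&BBOX=-125,25,-65,50&WIDTH=600&HEIGHT=400&FORMAT=image/png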

This was an information-packed presentation with lots of technical data. I recommend that you download the presentation once it is available. Unfortunately the best part was the live demo of developing with JDeveloper and MapViewer, which the download won't capture.

Caching query results in 11g

Cache consistency
 – consistency is maintained by receiving notifications on subsequent round trips to the server
 – in a relatively idle client, the cache can trail the database by no more than CACHE_LAG milliseconds
 – changes invalidate affected cached results
 – the cache is bypassed if the session has outstanding transactions on tables in the query
 – frees the application from manually checking for changes, polling the database, and refreshing result sets
 – these checks are very difficult to program by hand because the database does not expose all the elements required
 – you need the 11g client and the 11g server to make this work properly
 – you need to enable statement caching to make this work; it can be done in the client or on the mid-tier (see the sketch after this list)
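
A minimal sketch of enabling implicit statement caching with the Oracle JDBC OCI driver; the connect string and credentials are placeholders:

    import java.sql.Connection;
    import oracle.jdbc.OracleConnection;
    import oracle.jdbc.pool.OracleDataSource;

    public class StatementCacheSetup {
        // Note the OCI driver prefix in the URL; the consistent client cache
        // works with OCI-based drivers, not the thin driver.
        public static Connection connect() throws Exception {
            OracleDataSource ods = new OracleDataSource();
            ods.setURL("jdbc:oracle:oci:@//dbhost:1521/orcl");
            ods.setUser("scott");
            ods.setPassword("tiger");
            ods.setImplicitCachingEnabled(true); // implicit statement caching on
            Connection conn = ods.getConnection();
            ((OracleConnection) conn).setStatementCacheSize(50); // per-connection cache
            return conn;
        }
    }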

OCI consistent client cache enabling
 – works with OCI-based drivers; requires Enterprise Edition
 – on the server, set CLIENT_RESULT_CACHE_SIZE (must be non-zero, up to 2 GB) and CLIENT_RESULT_CACHE_LAG (3000 ms default; setting the lag to zero disables it)
 – on client (set in sqlnet.ora)
    – OCI_RESULT_CACHE_MAX_SIZE (optional)
    – OCI_RESULT_CACHE_MAX_RSET_SIZE (optional)
    – OCI_RESULT_CACHE_MAX_RSET_ROWS (optional)
the client values override the server settings and can be set temporarily (sample settings below)
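
Putting the parameters above together, the settings might look like this; the sizes are illustrative, only the parameter names come from the talk:

    # server side (spfile/init.ora)
    CLIENT_RESULT_CACHE_SIZE = 64M    # non-zero enables the cache, up to 2G
    CLIENT_RESULT_CACHE_LAG  = 3000   # milliseconds; 0 disables the lag

    # client side (sqlnet.ora), all optional; these override the server values
    OCI_RESULT_CACHE_MAX_SIZE      = 32M
    OCI_RESULT_CACHE_MAX_RSET_SIZE = 1M
    OCI_RESULT_CACHE_MAX_RSET_ROWS = 5000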

the query requires the /*+ result_cache */ hint in the code; this will be automated at a later date
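
A hedged example of the hint in application code; the table and column names are placeholders:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ResultCacheQuery {
        // With the OCI driver and the settings above, repeated executions of
        // this statement can be served from the client-side result cache.
        static void printStates(Connection conn) throws Exception {
            String sql = "SELECT /*+ result_cache */ state_name, population FROM states";
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + ": " + rs.getLong(2));
                }
            }
        }
    }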

look for candidate queries in AWR
 – frequent queries in Top SQL
 – identify candidate queries on read-only/read-mostly tables
 – sprinkle the hint on queries and measure
monitor usage
 – client_result_cache_stats$ (see the sketch below)
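
A quick way to dump the statistics view from the client; the columns are printed generically, since I am not assuming the view's exact layout:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class CacheStatsDump {
        // Prints every column of every row of the cache statistics view.
        static void dump(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT * FROM client_result_cache_stats$")) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        row.append(md.getColumnName(i)).append('=')
                           .append(rs.getString(i)).append("  ");
                    }
                    System.out.println(row);
                }
            }
        }
    }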

Tom Kyte – How do you know what you know…

Tuesday Morning Keynote – Tom Kyte

There have been 14 production releases of the database since I have been at Oracle. When a question comes up I have to think of the answer, and of which version that answer is correct for.

New ideas almost always come from those uncommitted to the status quo – from outsiders

educated incapacity – experience itself can be a barrier to creative ideas

assumptions – incorrect assumptions are barriers to creativity
                   – incorrect assumptions lead down the wrong roads

Things like "GROUP BY sorts the data". This assumption is wrong: ORDER BY sorts the data, while GROUP BY may order it by hash values, so output that happens to look sorted today may come back unsorted tomorrow. Empirical evidence is not always the correct view.

judgements – "we tried that once… it never worked… they will never buy that"

An exercise was done yesterday comparing a DBA with scripts against someone trained on the tools. The result was that the DBA with scripts could detect the problem quicker but could not fix it as fast; the DBA with the tools found the problem in a similar amount of time, and the tool fixed it quicker. The focus for the tools being developed is to replace the mundane tasks, not to automate the function of the DBA.

Interesting comment: what I am good at today didn’t exist when I was 12 years old. It is important to remember that everything is transient, and to know the how and why rather than the technology. The data is the important part that lasts; the process, application, and technology are what change.