Converting citygml to KML?

I am trying to render CityGML files, and I am using NASA World Wind (NWW) for rendering. I have used the citygml4j library to extract the GroundSurface, roof, and wall geometry, and was able to render them. But due to the huge data specification of CityGML, I am having trouble with more complex files. So I am looking for a Java library that can convert a CityGML file to the KML data format, which can be easily visualized in World Wind. I have seen 3DCityDB and found it pretty cool, but couldn't find any set of APIs that could be used for my purpose (I haven't completely explored it).

Does a Java library exist that can fulfill my purpose?

First, the most important fact: CityGML is not a rendering format. It is designed for exchanging information, which explains its complexity.

If you need KML, you can use the already-mentioned 3DCityDB. It has an export plugin for KML and COLLADA. At the moment, many developers are going for COLLADA, which you can convert into glTF and hand to Cesium.

You can also work directly with the PostGIS database. It has functions like ST_AsKML or ST_AsX3D that you might use.
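If you end up writing the export yourself on top of the geometry you already extract with citygml4j, the core of a KML export is just serializing each polygon ring as lon,lat,alt triplets inside a `<Polygon>` element. A minimal sketch in Python (the helper and the coordinates are illustrative, not part of any library):

```python
# Minimal sketch: turn one polygon ring (lon, lat, alt tuples) into a
# KML <Placemark>. Coordinates below are made up for illustration.

def ring_to_kml_placemark(name, ring):
    """Serialize one outer ring as a KML <Placemark> with a <Polygon>."""
    # KML expects "lon,lat,alt" triplets separated by whitespace,
    # with the ring closed (first vertex repeated at the end).
    if ring[0] != ring[-1]:
        ring = ring + [ring[0]]
    coords = " ".join(f"{lon},{lat},{alt}" for lon, lat, alt in ring)
    return (
        f"<Placemark><name>{name}</name>"
        "<Polygon><altitudeMode>absolute</altitudeMode>"
        "<outerBoundaryIs><LinearRing><coordinates>"
        f"{coords}"
        "</coordinates></LinearRing></outerBoundaryIs>"
        "</Polygon></Placemark>"
    )

roof = [(13.40, 52.52, 35.0), (13.41, 52.52, 35.0),
        (13.41, 52.53, 35.0), (13.40, 52.52, 35.0)]
kml = ring_to_kml_placemark("RoofSurface", roof)
```

Wrapping a list of such placemarks in a `<kml><Document>…</Document></kml>` envelope gives a file that World Wind's KML support can load directly.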

Utilities of Virtual 3D City Models Based on CityGML: Various Use Cases

Virtual 3D city models are increasingly being used to model the realms of the real world for utilization in a number of applications related to environmental simulations, including urban planning, mapping the energy characteristics of buildings, noise mapping, flood modelling, etc. Apart from geometric and appearance/textural information, these applications have a requirement for complex urban semantics. Currently, a number of 3D standards are available in CAD, BIM, and GIS related domains for the storage, visualization, and transfer of 3D geospatial datasets. Initially, the 3D data models (such as COLLADA, VRML, X3D, etc.) were purely graphical/geometrical in nature and mainly used for visualization purposes. With the inclusion of thematic modules in OGC CityGML, the integration of geometry and semantics in a single data model paved the way for better sharing of virtual 3D city models. In spite of the availability of a wide range of 3D data standards, there are certain differences with respect to geometry, topology, semantics, LODs, etc., which complicate the integration of 3D geodata from heterogeneous sources. This paper serves to highlight the need for innovative solutions with respect to urban environment-related simulations primarily based on the use of virtual 3D city models. Four use cases are studied in this context, namely: (1) urban solar potential estimation using CityGML models, (2) simulation of traffic noise levels mapped on building walls from urban road segments, (3) CityGML-based 3D data model interoperability, and (4) 3D indoor logistics and subsurface utilities. However, for modelling the majority of use cases, CityGML does not provide explicit thematic representations but provides support for extending the CityGML schema using Application Domain Extensions.
In a nutshell, the study explores the semantic modelling capabilities of the CityGML for the transformation of native 3D virtual city models to one satisfying capabilities like semantic information and support towards interoperability.

A BIM-GIS Integrated Web-based Visualization System for Low Energy Building Design

A building's energy performance is significantly influenced by its external built environment. However, in current design practices, the energy design and simulation of a building are often conducted independently of the surrounding buildings and the energy landscape of the community or city. There is a gap between low energy building design and urban-level energy planning. The aim of this paper is thus to facilitate collaborative work between the two processes of urban-level energy planning and building-level energy design by adopting a holistic energy design approach. This paper develops a BIM-GIS integrated web-based building energy data visualization system. The use of this system contextualizes low energy building design in the urban energy landscape. The paper also details the solutions for data conversion, which is the main challenge for integrating BIM and GIS, and describes the visualization mechanisms.

GLTF and WGS84 or ECEF #16

Quick question: I have 3DCityDB exports working nicely now, and I've tested imports in EPSG:3857 and EPSG:4326. I'm doing large-scale imports from OSM (e.g., state-wide and larger), so I'm looking to use EPSG:4326 for accuracy. EPSG:3857 works for KML+GLTF exports but of course is distorted in Cesium.

I've done KML extracts in EPSG:4326 which look okay, but are very slow to render in-browser. Unfortunately exports in the GLTF format don't show up. Is there a fundamental reason why GLTF doesn't show up? I realize it may have to do with showing the vertices in degrees, but I didn't think this was a limitation of GLTF.

Stirringhalo commented Sep 29, 2016

In digging deeper into the spec, I've discovered I was mistaken and that units in meters are required by spec. I assume the requirement is that the X,Y,Z coords are relative to ground, so not ECEF.

I'm wondering if there's an easy way around this, my probably naive thinking is as follows:

1. Import in EPSG:4326 (tagging it as such in the DB PostGIS SRID)
2. Populate a column of the cityobject table with the center (or use the center of the envelope for speed)
3. When an export occurs (to glTF files or through WFS), convert the lat+long+z to meters from the pre-calculated center, and write to glTF. Add in the CESIUM_RTC extension and define the center to be the one previously calculated.

Edit: Naive thinking was wrong

Stirringhalo commented Sep 29, 2016

Actually, I can't populate a column, as the RTC would be different depending on tile size and is the center of multiple objects. So, modified:

  1. Import in EPSG:4326 (tagging it as such in the DB PostGIS SRID)
  2. When an export occurs (to glTF files or through WFS), calculate the center of the tile and add in the CESIUM_RTC extension with that tile center (in degrees). Vertices would then become the x,y,z components of the line connecting the center of the tile to the vertex.
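The idea in step 2 can be sketched numerically: convert each geographic vertex to ECEF with the standard WGS84 formulas, pick a per-tile center, and store only the small metre offsets from it (the CESIUM_RTC pattern). This is an illustrative sketch, not the 3DCityDB exporter's actual code:

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0              # semi-major axis (m)
E2 = 6.69437999014e-3      # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Geographic coordinates (degrees, metres) -> ECEF metres."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return (x, y, z)

def rtc_offsets(vertices_llh, center_llh):
    """Per-vertex metre offsets relative to a CESIUM_RTC-style center."""
    cx, cy, cz = geodetic_to_ecef(*center_llh)
    return [
        (x - cx, y - cy, z - cz)
        for (x, y, z) in (geodetic_to_ecef(*v) for v in vertices_llh)
    ]

# Two vertices a few metres apart; center chosen as the first vertex.
verts = [(52.52000, 13.40000, 34.0), (52.52001, 13.40001, 34.0)]
offsets = rtc_offsets(verts, verts[0])
```

The offsets stay small (metres rather than millions of metres), which is exactly what keeps single-precision glTF vertex buffers from jittering in Cesium.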

Stirringhalo commented Oct 5, 2016

I've asked on the cesium-dev group (!topic/cesium-dev/t6VgLN5wtVI) and was told glTF supports ECEF coordinates, which would probably satisfy the need for low distortion when placing glTF models.

I did an import with ECEF coordinates instead of EPSG:4326; it didn't help, and the tile was still missing (though the extent was calculated correctly).

Is there something missing/or an assumption in the 3dwebmap client that prevents the use of ECEF coordinates?

Sorry for all the questions; this is unfortunately a major roadblock issue for me.

Yaozhihang commented Oct 5, 2016

The KML/COLLADA/glTF-Exporter has been implemented based on the assumption that the imported CityGML datasets use a projected or a 3D compound spatial reference system, because, to the best of our knowledge, none of the existing official CityGML datasets employed by cities or mapping agencies worldwide use a geographic coordinate reference system (e.g., EPSG:4326). Thus, if you import a CityGML dataset with EPSG:4326 into the 3DCityDB, the KML/COLLADA/glTF-Exporter may sometimes produce unexpected results when exporting KML/COLLADA/glTF models.

Besides, according to the glTF spec, the exported glTF model vertices are defined in a local space, which is a 3D Cartesian coordinate system. Here, we simply use the KML <Model> element to store the location (lat, lon, alt) and orientation of the reference point in world space for each building object, instead of using the CESIUM_RTC extension. The 3dcitydb-web-map is able to parse the KML <Model> element to retrieve the ECEF coordinates of each building object and place it on the Cesium Virtual Globe.
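The `<Model>`-based approach described above can be illustrated with any XML parser: the exporter stores one geographic reference point per object, which a client reads back before placing the local-space glTF geometry. A hedged sketch (the element layout follows KML's `<Model>`/`<Location>` structure; the sample values and file name are made up):

```python
import xml.etree.ElementTree as ET

# A minimal KML <Model> fragment as an exporter might emit per building
# (sample values are made up; the kml namespace is omitted for brevity).
KML_MODEL = """
<Placemark>
  <Model>
    <altitudeMode>absolute</altitudeMode>
    <Location>
      <longitude>13.40</longitude>
      <latitude>52.52</latitude>
      <altitude>34.0</altitude>
    </Location>
    <Link><href>building_0001.dae</href></Link>
  </Model>
</Placemark>
"""

def model_reference_point(placemark_xml):
    """Extract the (lon, lat, alt) reference point from a KML <Model>."""
    root = ET.fromstring(placemark_xml)
    loc = root.find("./Model/Location")
    return (
        float(loc.find("longitude").text),
        float(loc.find("latitude").text),
        float(loc.find("altitude").text),
    )

lon, lat, alt = model_reference_point(KML_MODEL)
```

A client would then convert this reference point to ECEF and use it as the model's world transform, exactly as the 3dcitydb-web-map does with Cesium.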

The New CityGML 3.0 Core Module

The CityGML 3.0 Space Concept

In CityGML 3.0, a clear semantic distinction of spatial features is introduced by mapping all city objects onto the semantic concepts of spaces and space boundaries. A Space is an entity of volumetric extent in the real world. Buildings, water bodies, trees, rooms, and traffic spaces, for instance, have a volumetric extent. Hence, they are modelled as spaces or, more precisely, as specific subclasses of the abstract class Space. A Space Boundary is an entity with areal extent in the real world. Space Boundaries delimit and connect Spaces. Examples are the wall surfaces and roof surfaces that bound a building; the water surface as the boundary between the water body and the air; the road surface as the boundary between the ground and the traffic space; or the digital terrain model representing the space boundary between the over- and underground space.

To obtain a more precise definition of spaces, they are further subdivided into physical spaces and logical spaces. Physical spaces are spaces that are fully or partially bounded by physical objects. Buildings and rooms, for instance, are physical spaces, as they are bounded by walls and slabs. Traffic spaces of roads are physical spaces, as they are bounded by road surfaces against the ground. Logical spaces, in contrast, are spaces that are not necessarily bounded by physical objects, but are defined according to thematic considerations. Depending on the application, logical spaces can also be bounded by non-physical, i.e. virtual, boundaries, and they can represent aggregations of physical spaces. A building unit, for instance, is a logical space, as it aggregates specific rooms into flats; the rooms are the physical spaces that are bounded by wall surfaces, whereas the aggregation as a whole is delimited by a virtual boundary. Other examples are city districts, which are bounded by virtual, vertically extruded administrative boundaries; public spaces vs. security zones in airports; or city zones with specific regulations stemming from urban planning. The definition of physical and logical spaces and of corresponding physical and virtual boundaries is in line with the discussion in Smith and Varzi (2000) on the difference between bona fide and fiat boundaries to bound objects. Bona fide boundaries are physical boundaries; they correspond to the physical boundaries of physical spaces in CityGML 3.0. In contrast, fiat boundaries are man-made boundaries; they are equivalent to the virtual boundaries of logical spaces.

Physical spaces, in turn, are further classified into occupied spaces and unoccupied spaces. Occupied spaces represent physical volumetric objects that occupy space in the urban environment. Examples for occupied spaces are buildings, bridges, trees, city furniture, and water bodies. Occupying space means that some space is blocked by these volumetric objects; for instance, the space blocked by the building in Fig. 2 cannot be used any more for driving through this space or placing a tree on that space. In contrast, unoccupied spaces represent physical volumetric entities that do not occupy space in the urban environment, i.e. no space is blocked by these volumetric objects. Examples for unoccupied spaces are building rooms and traffic spaces. There is a risk of misunderstanding the term OccupiedSpace. However, we decided to use the term anyway, as it has been established in the field of robotics for over three decades (Elfes 1989). The navigation of mobile robots makes use of a so-called occupancy map that marks areas that are occupied by matter and, thus, are not navigable for robots.

Occupied and unoccupied spaces

Semantic objects in CityGML are often composed of parts, i.e. they form multi-level aggregation hierarchies. This also holds for semantic objects representing occupied and unoccupied spaces. In general, two types of compositions can be distinguished:

Spatial partitioning Semantic objects of either the space type OccupiedSpace or UnoccupiedSpace are subdivided into different parts that are of the same space type as the parent object. Examples are Buildings that can be subdivided into BuildingParts, or Buildings that are partitioned into ConstructiveElements. Buildings as well as BuildingParts and ConstructiveElements represent OccupiedSpaces. Similarly, Roads can be subdivided into TrafficSpaces and AuxiliaryTrafficSpaces, all objects being UnoccupiedSpaces.

Nesting of alternating space types Semantic objects of one space type contain objects that are of the opposite space type to the parent object. Examples are Buildings (OccupiedSpace) that contain BuildingRooms (UnoccupiedSpace), BuildingRooms (UnoccupiedSpace) that contain Furniture (OccupiedSpace), and Roads (UnoccupiedSpace) that contain CityFurniture (OccupiedSpace). The categorization of a semantic object as occupied or unoccupied takes place at the level of the object in relation to the parent object. A building is part of a city model; thus, first of all, it occupies urban space within a city. As long as the interior of the building is not modelled in detail, the space covered by the building needs to be considered as occupied and only viewable from the outside. To make the building accessible inside, voids need to be added to the building in the form of building rooms. The rooms add free space to the building interior, i.e. the OccupiedSpace now contains UnoccupiedSpace. The free space inside the building can, in turn, contain objects that occupy space again, such as furniture or installations. In contrast, roads also occupy urban space in the city; however, this space is initially unoccupied, as it is accessible by cars, pedestrians, or cyclists. Adding traffic signs or other city furniture objects to the free space results in specific sections of the road becoming occupied by these objects. Thus, one can also say that occupied spaces are mostly filled with matter, whereas unoccupied spaces are mostly free of matter and thus realise free spaces.
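The alternating nesting described above can be sketched as a small class hierarchy. This is an illustrative data model only, not the normative CityGML 3.0 UML:

```python
# Illustrative sketch of the CityGML 3.0 space hierarchy (not normative).

class Space:                          # abstract root concept
    def __init__(self, name):
        self.name = name
        self.children = []            # nested spaces

    def add(self, child):
        self.children.append(child)
        return child

class PhysicalSpace(Space): pass
class OccupiedSpace(PhysicalSpace): pass
class UnoccupiedSpace(PhysicalSpace): pass

class Building(OccupiedSpace): pass
class BuildingRoom(UnoccupiedSpace): pass
class Furniture(OccupiedSpace): pass

def alternates(parent, child):
    """True if a parent/child pair alternates occupied/unoccupied."""
    return (isinstance(parent, OccupiedSpace) and isinstance(child, UnoccupiedSpace)) or \
           (isinstance(parent, UnoccupiedSpace) and isinstance(child, OccupiedSpace))

# Building (occupied) contains a BuildingRoom (unoccupied),
# which in turn contains Furniture (occupied again).
building = Building("residential building")
room = building.add(BuildingRoom("living room"))
chair = room.add(Furniture("chair"))
```

The `alternates` check captures the rule from the text: nesting flips the space type at each level, while spatial partitioning (Building → BuildingPart) keeps it.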

The classification of feature types into OccupiedSpace and UnoccupiedSpace also defines the semantics of the geometries attached to the respective features. For OccupiedSpaces, the attached geometries describe volumes that are (mostly) physically occupied. For UnoccupiedSpaces, the attached geometries describe (or bound) volumes that are (mostly) physically unoccupied. This also has an impact on the required orientation of surface normals for attached thematic surfaces. For OccupiedSpaces, the normal vectors of thematic surfaces must point in the same direction as the surfaces of the outer shell of the volume. For UnoccupiedSpaces, the normal vectors of thematic surfaces must point in the opposite direction to the surfaces of the outer shell of the volume. This means that, from the perspective of an observer of a city scene, the surface normals must always be directed towards the observer. In the case of OccupiedSpaces (e.g. Buildings, Furniture), the observer must be located outside the OccupiedSpace for the surface normals to be directed towards the observer, whereas in the case of UnoccupiedSpaces (e.g. Rooms, Roads), the observer is typically inside the UnoccupiedSpace.
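The normal-orientation rule can be checked mechanically: compute each polygon's normal (e.g., with Newell's method) and compare it with the expected outward (OccupiedSpace) or inward (UnoccupiedSpace) direction. A small sketch, assuming counter-clockwise vertex order yields the upward/outward normal:

```python
# Sketch: Newell's method for a polygon normal, usable to verify the
# orientation rule for thematic surfaces (illustrative, not normative).

def newell_normal(vertices):
    """Un-normalised polygon normal via Newell's method."""
    nx = ny = nz = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1, z1 = vertices[i]
        x2, y2, z2 = vertices[(i + 1) % n]
        nx += (y1 - y2) * (z1 + z2)
        ny += (z1 - z2) * (x1 + x2)
        nz += (x1 - x2) * (y1 + y2)
    return (nx, ny, nz)

# A horizontal roof polygon wound counter-clockwise when seen from above:
roof_ccw = [(0, 0, 10), (5, 0, 10), (5, 5, 10), (0, 5, 10)]
up = newell_normal(roof_ccw)          # points up (+z): outward, as required for a Building
down = newell_normal(roof_ccw[::-1])  # reversed winding points down (-z)
```

Flipping the winding order flips the normal, which is exactly the difference between a RoofSurface of a Building and an InteriorWallSurface of a BuildingRoom.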

The classification into OccupiedSpace and UnoccupiedSpace might not always be apparent at first sight. Carports, for instance, represent an OccupiedSpace, although they are not closed and most of the space is free of matter; see Fig. 3. Since a carport is a roofed, immovable structure with the purpose of providing shelter to objects (i.e. cars), carports are frequently represented as buildings in cadastres. Thus, also in CityGML, a carport should be modelled as an instance of the class Building. Since Building is transitively a subclass of OccupiedSpace, a carport is an OccupiedSpace as well. However, only in LOD1 would the entire volumetric region covered by the carport be considered as physically occupied. In LOD1, the occupied space is defined by the entire carport solid (unless a room were defined in LOD1 that modelled the unoccupied part below the roof), whereas in LOD2 and LOD3, the solids represent the actually physically occupied space of the carport more realistically. In addition, for all OccupiedSpaces, the normal vectors of thematic surfaces like the RoofSurface need to point away from the solids, i.e. consistent with the solid geometry.

Representation of a carport as OccupiedSpace in different LODs. The red boxes represent solids, the green area represents a surface. In addition, the normal vectors of the roof solid (in red) and the roof surface (in green) are shown

In contrast, a room is a physically unoccupied space. In CityGML, a room is represented by the class BuildingRoom, which is a subclass of UnoccupiedSpace. In LOD1, the entire room solid would be considered as unoccupied space, which can nevertheless contain furniture and installations, as is shown in Fig. 4. In LOD2 and LOD3, the solid represents the actually physically unoccupied space of the room more realistically (possibly somewhat generalised, as indicated in the figure). For all UnoccupiedSpaces, the normal vectors of the bounding thematic surfaces like the InteriorWallSurface need to point inside the object, i.e. opposite to the solid geometry.

Representation of a room as UnoccupiedSpace in different LODs. The red boxes represent solids, the green area represents a surface. In addition, the normal vectors of the room solid (in red) and the wall surface (in green) are shown

GeoWeb – Part II – GML and KML

So what is the difference between GML and KML (Google)? I think one of the big differences is that there is a big, well-known styling engine for KML called Google Earth. If I create some GML data, what then? Well, I have to have a style engine and a rendering engine to make a map. Is that difficult? Nope, but it is not no work either. To get a style engine is NOT so hard; just about any sort of XSLT engine will do the trick. And for rendering the output of the styling process (typically SVG) you need something like the Adobe plugin for SVG or the Batik libraries to turn it into an image (PNG, TIF, JPEG). Note that SVG allows image underlays, so that part is easy too. Google Earth provides you a ready-to-hand set of images (of their choosing) with global coverage. So am I saying I can make my own Google Earth? Yes, I guess that is what I am saying. Here is the basic recipe. Let's leave the Google Earth images out of it for the moment.
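The recipe above (style engine plus rendering engine) can be illustrated even without a full XSLT pipeline: pull the coordinates out of a GML posList and emit an SVG polygon. A toy sketch with made-up data; a real pipeline would run an XSLT stylesheet at this step:

```python
import xml.etree.ElementTree as ET

# Toy GML fragment (made up); coordinates are already in "screen" units.
GML = """
<gml:featureMember xmlns:gml="http://www.opengis.net/gml">
  <gml:posList>0 0 100 0 100 60 0 60</gml:posList>
</gml:featureMember>
"""

NS = {"gml": "http://www.opengis.net/gml"}

def gml_to_svg_polygon(gml_xml, fill="steelblue"):
    """'Style' a GML posList into an SVG <polygon> element."""
    root = ET.fromstring(gml_xml)
    nums = root.find("gml:posList", NS).text.split()
    # posList is a flat x y x y ... sequence; pair it up for SVG.
    pts = " ".join(f"{x},{y}" for x, y in zip(nums[0::2], nums[1::2]))
    return f'<polygon points="{pts}" fill="{fill}" />'

svg = gml_to_svg_polygon(GML)
```

Batik (or any SVG renderer) then turns the resulting SVG document into a PNG/TIF/JPEG, which is the second half of the recipe.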

The Need for Comprehensive 3D City Models (Part 1)

At PenBay, we have spent a significant amount of time over the past several years working on ways to model the insides of buildings in GIS. I have written repeatedly about the subject, and it is an area that continues to fascinate me. On my recent trip to Vancouver to speak at the GeoWeb 2009 conference, however, I was inspired by Thomas Kolbe’s work on CityGML to think more about collections of buildings and how they work together in an urban environment. As we move to this city and regional scale, the level of granularity at which we model our buildings has big implications for scalability, performance, and the tool sets that we use for visualization and analysis. For the purposes of our discussion here, let’s define a “City” as a reasonably large collection of buildings in a condensed area. This city might be a traditional municipality like Philadelphia or Chicago, it might be a military city like Langley Air Force Base, or it might be a college campus like Boston University.


Project 2TaLL required specialist training and consultations aimed at broadening the knowledge and competences of project team members. A milestone development was a training on the CityGML standard and FME software organized by Virtual City System in Berlin (July 2014). The training helped develop skills in using CityGML models and enabled project team members to create programs to process such data (crucial for e.g. 3d-negative). Other training and consultations were held in Poland and focused on GIS techniques, programming, and mathematics. The transfer of knowledge enabled the development of the team's capacity, new analytical methods, and computer applications (C++). The process helped verify the suitability of city landscape analysis systems (e.g. ESRI).

The basic CityGML seminar from virtualcitySYSTEMS provides an introduction to the international OGC CityGML standard. It covers the basics for understanding CityGML and explains the role of CityGML within 3D GIS applications and processes. Building on the content of the basic seminar, we offer an in-depth expert seminar on CityGML. The seminar deals with current topics from CityGML practice and works through their theoretical fundamentals in detail. Where possible, seminar participants are introduced to the practical implementation of these topics through accompanying exercises.

BASIC CityGML Training
Urban Information Modelling with CityGML
* What is urban information modeling?
* Introduction to GML 3 and ISO 19100 standards group
* Basic concepts and thematic modules in CityGML 2.0
* Modeling of buildings, terrain, and additional CityGML feature types
* Multi-scale modeling, Geometric-topological modeling with GML 3 and CityGML
* Surface properties, Implicit geometries
* Extension mechanisms
Relationship of CityGML to other 3D data formats
* CityGML and visualization formats, such as KML/COLLADA or X3D
* CityGML and CAD/BIM
Relationship of CityGML to INSPIRE
Current applications of CityGML in practice and research
* Examples from areas, such as environmental and energy planning, noise simulation, disaster management or city planning

CityGML Processing with FME
The FME technology from Safe Software has established itself as a standard tool for ETL (extract, transform, load) processes, for handling GIS data, and for supporting well over 300 conventional GIS data formats in addition to CityGML in its current version. FME therefore lends itself to the creation, processing, and transformation of CityGML data from various data sources as well as to the provisioning of CityGML data in various target data formats. Our seminar provides an extensive introduction to the processing of CityGML with FME. In addition to basic processes for reading and writing CityGML data and their practical implementation, access to the 3D City Database forms a significant point of interest in the seminar. The training begins with an introduction to the CityGML data model and 3D data processing with FME. Reading and writing CityGML with FME is introduced and exercised using practical examples. The focus is on producing valid building data from heterogeneous input data. In addition, other object classes, such as terrain, public amenities, and vegetation objects, are dealt with as well. On the second day, the emphasis is placed on connecting databases as data sources and on how to directly access the CityGML database “3DCityDB”.
Training components:
* 3D data formats and dealing with 3D geometry
* Basics of reading and writing of CityGML
* Converting 3D data formats in CityGML
* Converting CityGML into other 3D formats
* Database access with FME
* Manipulating and extracting objects from the 3DCityDB

05.2014 – 02.2016 SZCZECIN | IT consultations
inż. Maciej Berdyszak, mgr inż. arch. Maciej Jarzemski

Public Safety

In my mind, I find it helpful to define public safety differently from security, although the two worlds often overlap. I think about security as being primarily concerned with preventing bad things from happening. Public safety, on the other hand, is primarily concerned with responding after something bad has already happened. Classic public safety agencies, therefore, would be the fire department and emergency medical services. These agencies are often concerned with the locations of potentially dangerous items that may exist in an area in which they will need to operate in an emergency situation. The locations of hazmat closets and propane tanks are of particular interest to fire departments, for example. Fire departments are also very interested in the locations of available fire response equipment like fire extinguishers and standpipes inside a building, and in how a given building or set of buildings relates to the available water supply in the surrounding area. Like the security community, public safety personnel are also very interested in understanding how to manage the local transportation infrastructure in the case of an emergency. If they have to evacuate 1,500 people from a high-rise building in downtown Manhattan, where do they put those people? Where do they set up triage centers? How do they control access and egress from their operational area? How do they gain access and egress from the buildings that they need to operate in? How do they understand the nature of the population that might be resident in the building at the time of the event?

Space Management

Space Management is primarily the concern of the building owner or occupier. Space managers are interested in understanding the form, function, assignment, and availability characteristics of their space in 3-D over time. They are also interested in monitoring and managing various performance metrics of their spaces both individually and collectively. Performance metrics such as cost per square foot, energy consumption per square foot, occupancy rates, and personnel density help the space manager optimize the use of their occupied space. Often a space manager will rely on a computer aided facilities management (CAFM) system such as Archibus or Centerstone to support workflows related to move management, room reservations, lease administration, etc. Being able to share geographic information with other facilities management information across system boundaries is a critical requirement of the space management community.

Commercial Real Estate

The commercial real estate community shares a number of the concerns of the space management community but at a slightly less granular scale. Commercial real estate portfolio managers are often interested in understanding their portfolios at the suite or building level rather than at the individual room level. That said, they share a requirement to be able to visualize occupancy rates and other portfolio performance metrics in four dimensions across their portfolio holdings and are often interested in how their portfolio relates to the demographics of the surrounding area.

Public Administration

There are a number of public administration agencies that have interests inside the building. Some of these agencies are interested in regulating land-use entitlement. Their interests are in understanding permitted uses of buildings in three dimensions over time. They will also have a requirement to administer local taxation policy in three dimensions. Other agencies will be concerned with regulating certain activities inside buildings. There is a whole host of inspection workflows administered by your typical city administration, from restaurant inspections to day care inspections to fire safety inspections and many others. All of these inspection workflows require a basic understanding of the layout of the interior of the building along with the locations of certain domain-specific elements. Depending on the workflow, those elements might include exhaust hoods, toilets, sprinkler systems, standpipes, etc.

Facilities Management

For the purposes of our discussions here, I use the term facilities management to mean the maintenance activities required for a collection of buildings. Facilities management personnel are primarily concerned with the existing condition of the buildings under their control and the locations of anything that might require scheduled or unscheduled maintenance. Facilities management personnel often use some sort of work order management system like SAP or IBM Maximo to help organize and document their work. For this community, the ability to interchange geospatial locations of maintainable assets across system boundaries is a critical requirement.

Environmental Monitoring / Public Health

The environmental health and human safety community is concerned with monitoring environmental quality inside and outside buildings across the landscape. They will often use a combination of stationary and mobile environmental quality monitoring systems to collect, analyze, and store a variety of environmental quality samples. This community is primarily concerned with understanding the distribution of various contaminants throughout the urban environment both indoors and outdoors over time. They have a need to understand how various factors from blasts to wind to rain might affect dispersion patterns of contamination and to predict under what conditions that contamination might become harmful to human health.

Energy Management

Energy Management is becoming a critical concern for all thoughtful urban designers. As our global population becomes increasingly urbanized, the vast concentration of buildings (the most dramatic energy consumers on the planet) in our cities concentrates energy consumption in our urban centers, puts increasing stress on our energy distribution infrastructure, and dramatically drives up the global demand for energy in all forms. In response, campus managers and urban planners are becoming increasingly proactive in monitoring energy consumption on a per-building, and sometimes a per-space, basis over time. University campuses are becoming particularly proactive on issues of energy consumption. Many universities have developed policies that prescribe compliance with LEED standards for energy efficiency for all university-owned buildings. At the individual building scale, many facilities managers are starting to be much more proactive about using their smart building control systems from Honeywell or Johnson Controls to drive down energy consumption on a per-room basis across their portfolio. At the city scale, some municipalities, particularly in Europe, are undertaking wholesale infrared imagery collection efforts so that they can identify the most egregious energy inefficiencies in their cities and target those buildings for energy conservation efforts.

3D world in the browser. Just like that

The world is xmlizing, I say. Every data type on the web has got to be in XML. That is good; it means open data, human-readable data, more accessible to society in all fields, wherever XML is used. XML is ubiquitous these days.
3D on the Internet has a long and interesting story; it started with VRML technology. Plenty of companies were making their own plug-ins, selling them, and showing off their enormous feature sets, but there was no standard. It was hard to think of one while OpenGL support was barely standing and HTML was exploding on the net. People started talking about how buggy and inefficient that kind of solution was. So VRML stayed in the shadows. That was years ago; some people had this dream of 3D worlds in a browser, a true 3D API with decent data for it. Years passed, and a successor of VRML came onto the stage: X3D.
X3D is a ratified standard from the Web3D Consortium; it grew up to become the next contender to fight its way into browsers. X3D is XML-based, and it includes old parts of VRML such as H-Anim, GeoVRML, and NURBS. It replaces VRML while providing compatibility with existing VRML content (VRML content can be read by X3D-reading applications). It was developed to be extensible and shrinkable as well. It consists of profiles, which include components. Components are collections of specific features; for instance, we have the Geo component. The reason for such a construction is that some companies would not like to support all of the features available in X3D, so they shrink X3D to their specific X3D data. They can also create new profiles and components. The solution is universal, as you can see. There is another very popular XML format for 3D representation: the Collada file (DAE extension). They are very similar in construction. Both the Web3D Consortium and the Khronos Group (the actual owner of Collada, which was originally made by Sony) are collaborating so that they can specify the fields where each of them will find its piece of land; they also want to make some of their parts the same (animation, etc.) so that the open source community will have more benefits. So if both are XML, can we... yes, we can make translations between them. There are XSLT transformations available, free of charge of course. In broad outline, though, Collada is considered an interchange format for standalone DCC (Digital Content Creation) applications, while X3D is focused on web issues. If you want a broader view of X3D/Collada, there is a fresh whitepaper from Tony Parisi [1] on this subject. So we have X3D; why do I find it so important for Web 3D GIS? Primarily, because it is a standard, a good stable standard, with really some serious time spent on developing its flexibility.
Secondly, it has the X3D Earth working group, whose vision is to create an X3D representation of the world. I know that is a big challenge, but they have some serious reinforcements as well (NASA). Anyway, there are also some other, non-standard technologies that have something to say when it comes to GIS data. There is the very nice CityGML, which already has some cities implemented in it, but it is not yet a standard; at the moment it is on its way to achieving that with the Open Geospatial Consortium. For web purposes, however, X3D should take the lead. Heh, again, CityGML is XML, so yes, an XSLT transformation to X3D is possible; more on that topic is covered in Kiwon Lee's document [2] about, literally speaking, CityGML-to-X3D transformations and mobile 3D GIS. So here we are at the point where we have a data type, X3D, which supports our GIS and, what is more, is web-focused. But what about the viewer?
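To make those XSLT translations concrete, here is a minimal Java sketch using the standard JAXP `javax.xml.transform` API. The file names and the stylesheet are placeholders — any Collada-to-X3D or CityGML-to-X3D stylesheet would plug in the same way:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltPipeline {
    /**
     * Applies an XSLT stylesheet (e.g. a Collada-to-X3D or CityGML-to-X3D
     * transform) to an XML document and returns the result as a string.
     */
    public static String transform(String stylesheet, String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(stylesheet)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }
}
```

For real files you would swap the `StringReader`/`StringWriter` pair for `File`-backed `StreamSource`/`StreamResult` objects; the pipeline is the same.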
The Web3D Consortium provides us with some free X3D viewers, but all of them have to be installed as new browser plug-ins before we can view anything, and that is a bit of a problem, because a good, stable, in some sense standardized player is very important to the internet community. I know that anyone who really wants 3D on the web will download a player and use it. But as far as I know reality, people are not keen on installing new plug-ins; most of them don't even know how to do it (despite it being automated these days). That's life. My point here is that the 3D we have to offer should be playable just like that, on opening the web page — like Google Maps, which is why it is so popular. There are also NASA World Wind and Google Earth (the standalones), but usability is what counts nowadays and what gets people talking: the practical usage of a solution.
There is one plug-in, however, that everyone already has. Yes, exactly: Flash Player. To be more concrete, I will recall the chart in Fig. 1.

Fig. 1 : The Adobe Flash Player statistics show that Flash content reaches more than 98% of users.

Yes, I know that can be a bit exaggerated (those are Adobe's own statistics), but it is hard to argue that Flash Player is not ubiquitous.
So Flash, Flash... wasn't that just for vivid 2D graphics? Yes it was, it still is, and it's evolving. In the meantime, we want to get started here with the new Adobe Flex 2.0 technology, which also uses Flash Player as its runtime environment. I gave you an example earlier of what the Flex ArcWeb Explorer is and how fascinated ESRI was by it. Now imagine connecting 3D options to it. I believe it is fully applicable. As I said before, we want to create a very portable and practical solution. The point I'm making is that, in my opinion, the X3D, Flex + Papervision3D bundle gives us a gateway for developing really nice-looking environments really fast. There are many gaps still to be filled with some coding, but the outline is there, steady and ready. OK, so are we ready to show some graphics? What else do we need? Rationality! And, again, the practical attitude — I'm going to keep repeating that till the end of my days. The public destination of the solution and its availability to everyone are really important issues. People create standards these days, not companies. So which part of our gateway needs to be replaced? What about X3D: do we have data implemented in it? Is it really in practical, wide usage? No, it isn't. My sentiment about X3D is similar to the one expressed by French president Charles de Gaulle about Brazil: it has a great future... and always will. We need something practical, something already in wide use and growing in the hands of its users, and moreover we need it now. X3D is supported by NASA and it may well become a standard, but today we need to pick something reasonable that will strongly support us with existing data. The best way to see what I mean is to compare NASA World Wind and Google Earth. Google Earth is starting to be created by users, for users, and that is what I call a standard. So what I advise myself is to use the kind of data Google Earth uses to represent the environment.
So here we are with KML (it seems we are going through the whole alphabet plus an "ML" suffix). You see, the ease with which I am switching between data types also comes from the fact that all of them are XML-based and translations between them are possible — we are not risking much. So again, my target is the type of data that will be most suitable for a wide audience, providing ease of use and good prospects for data creation. So what about this KML, you ask?

KML stands for Keyhole Markup Language. "Keyhole" is an earlier name for the software that became Google Earth; it was produced by Keyhole, Inc., which was acquired by Google in 2004. The term "Keyhole" actually honors the KH reconnaissance satellites, the original eye-in-the-sky military reconnaissance system, now some 30 years old. At the moment it is the standard way of representing data for Google Earth applications, and it has a really simple structure. Let's say we want to mark a point on Google Maps. The KML file would be as follows:
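A minimal sketch of such a point placemark (the name, description and coordinates are made up for illustration; coordinates are longitude, latitude, altitude):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://earth.google.com/kml/2.1">
  <Placemark>
    <name>Sample point</name>
    <description>A single marked location</description>
    <Point>
      <coordinates>-122.0822,37.4222,0</coordinates>
    </Point>
  </Placemark>
</kml>
```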

That gives us points, nothing special. But KML can also support us with polygon descriptions:
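For example, a simple extruded polygon might be described like this (coordinates again illustrative; the outer ring must close by repeating its first vertex):

```xml
<Placemark>
  <name>Sample polygon</name>
  <Polygon>
    <extrude>1</extrude>
    <altitudeMode>relativeToGround</altitudeMode>
    <outerBoundaryIs>
      <LinearRing>
        <coordinates>
          -122.0848,37.4222,30
          -122.0842,37.4222,30
          -122.0842,37.4227,30
          -122.0848,37.4227,30
          -122.0848,37.4222,30
        </coordinates>
      </LinearRing>
    </outerBoundaryIs>
  </Polygon>
</Placemark>
```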

Fig. 2 : Different types of polygons in KML, opened and placed on a specific location in Google Earth.

What is more, we can use Google Earth to create that kind of polygon using a real-world raster map as a drawing layer, so we can acquire polygon boundaries drawn on real-world blueprints with real-world coordinates. That is kind of fun. I wouldn't be lying if I said that, starting from now with no experience in Google Earth, I could create an outline of a small city in a few hours and export it to KML files, or create a KMZ archive (a zipped collection of KML files). Just like that — that is what I call simplicity. OK, let's go further. Starting from KML version 2.1, Google made a big step towards 3D worlds: we have support for an intermediate 3D modeling file format. Guess which... yes, our gaming friend Collada (you thought it would be X3D, huh?).
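Since a KMZ is just an ordinary ZIP archive whose main document is conventionally named `doc.kml`, packing one is straightforward. A minimal Java sketch (the class and file names are mine):

```java
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class KmzWriter {
    /**
     * Packs a KML document string into a KMZ archive. A KMZ is a plain ZIP
     * file whose main document is conventionally named doc.kml; textures and
     * Collada models referenced by the KML can be added as further entries.
     */
    public static void writeKmz(String kml, String path) throws Exception {
        try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(path))) {
            zip.putNextEntry(new ZipEntry("doc.kml"));
            zip.write(kml.getBytes(StandardCharsets.UTF_8));
            zip.closeEntry();
        }
    }
}
```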

So... plenty of new possibilities open up. Game creators can move their existing work to the Google Earth engine. Anyone can model a building in their favorite application, save it to a Collada file, and place it on the Earth at specific coordinates. I don't have to tell you what kind of options that gives. Can you imagine games in a real 3D environment, or millions of people creating 3D models of their homes? Well, it's going to happen faster than we think, I think. In the meantime... where were we with our lounge of chosen technologies?
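In KML 2.1 terms, this placement is done with the `<Model>` element, which links a Collada file and anchors it at geographic coordinates; a minimal sketch (the file name and coordinates are illustrative):

```xml
<Placemark>
  <name>Building model</name>
  <Model>
    <altitudeMode>relativeToGround</altitudeMode>
    <Location>
      <longitude>-122.0839</longitude>
      <latitude>37.4220</latitude>
      <altitude>0</altitude>
    </Location>
    <Link>
      <href>building.dae</href>
    </Link>
  </Model>
</Placemark>
```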

Fig. 3 : The KML file with a Collada .dae model attached to it, opened in Google Earth.

Our gateway for creating 3D worlds — based on true geographical data, open to everyone in a common browser — is: Flex with Papervision3D, plus the KML file format, which can include Collada files. In my opinion, that could be a really interesting solution for web-based 3D GIS. It is really easy to develop, and if you take a deeper look at the options each of these technologies gives, you will see that you can create some really fresh-looking content in a web browser... just like that.

References :
[1] "Developing Web Applications with COLLADA and X3D" by Dr. Rémi Arnaud and Tony Parisi
[2] "Prototype Implementations for Multi-typed 3D Urban Features Visualization on Mobile 3D and Web 3D Environments" by Lee Kiwon