Modern technologies allow web sites to be dynamically managed by building pages on the fly through scripts that retrieve data from a database. Dissociating the data from the layout directives makes data updates easy and the presentation homogeneous. However, many web sites are still made of static HTML pages in which data and layout information are interleaved. This leads to out-of-date information, inconsistent style, and tricky and expensive maintenance. This talk presents a tool-supported methodology to reengineer such web sites, that is, to extract their page contents. All the pages recognized as expressing the same application (sub)domain are analyzed to derive their common structure. This structure is formalized by an XML document, called META, which is then used to extract an XML document containing the data of the pages and an XML schema validating these data. The META document can describe various structures, such as alternative layouts and data structures for the same concept, structure multiplicity, and the separation between layout and informational content. The XML schemas extracted from the different page types are integrated and conceptualized into a unique schema describing the domain covered by the whole web site. Finally, the data are converted according to this new schema so that they can be used to produce the renovated web site. These principles will be illustrated through a case study using the tools that create the META document and extract the data and the XML schema.
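To make the extraction step concrete, the following is a minimal illustrative sketch, not the authors' tool: it recognizes a simple recurring page structure in a static HTML page (a hypothetical product page) and emits a data-only XML document, discarding the layout directives. All names (`ProductExtractor`, the `product` element, the sample page) are assumptions made up for the example.

```python
# Sketch: turn a static HTML page (data interleaved with layout)
# into a data-only XML document. Uses only the Python standard library.
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

# A static page in which data and layout are interleaved -- the
# situation the reengineering methodology targets (hypothetical example).
PAGE = """
<html><body>
  <h1>Acme Widget</h1>
  <p><b>Price:</b> 9.99</p>
  <p><b>Stock:</b> 42</p>
</body></html>
"""

class ProductExtractor(HTMLParser):
    """Recognize the recurring page structure and collect its data items."""
    def __init__(self):
        super().__init__()
        self.in_h1 = False        # inside the <h1> title element?
        self.pending_label = None  # a "Label:" waiting for its value
        self.data = {}
    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True
    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False
    def handle_data(self, text):
        text = text.strip()
        if not text:
            return
        if self.in_h1:
            self.data["name"] = text
        elif text.endswith(":"):
            self.pending_label = text.rstrip(":").lower()
        elif self.pending_label:
            self.data[self.pending_label] = text
            self.pending_label = None

parser = ProductExtractor()
parser.feed(PAGE)

# Build the data-only XML document; layout directives are discarded.
product = ET.Element("product")
for key, value in parser.data.items():
    ET.SubElement(product, key).text = value

print(ET.tostring(product, encoding="unicode"))
# → <product><name>Acme Widget</name><price>9.99</price><stock>42</stock></product>
```

In the methodology described above, the recognized common structure would instead be described declaratively in the META document, and the extracted data would be validated against the derived XML schema rather than hard-coded as here.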
Publication status: Published - 2003