A Methodological Approach for Web Sites Reengineering

Research output: Other contribution


Abstract

Modern technologies allow web sites to be dynamically managed by building pages on the fly through scripts that fetch data from a database. Dissociating data from layout directives makes data easy to update and presentation homogeneous. However, many web sites are still made of static HTML pages in which data and layout information are interleaved, which leads to out-of-date information, inconsistent style, and tricky, expensive maintenance. This talk presents a tool-supported methodology to reengineer web sites, that is, to extract the page contents. All the pages recognized as expressing the same application (sub)domain are analyzed to derive their common structure. This structure is formalized by an XML document, called META, which is then used to extract an XML document containing the data of the pages and an XML schema validating these data. The META document can describe various structures, such as alternative layouts and data structures for the same concept, structure multiplicity, and the separation between layout and informational content. The XML schemas extracted from different page types are integrated and conceptualised into a unique schema describing the domain covered by the whole web site. Finally, the data are converted according to this new schema so that they can be used to produce the renovated web site. These principles are illustrated through a case study using the tools that create the META document and extract the data and the XML schema.
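The core idea of the abstract — deriving a common structure from static pages and extracting their data into an XML document separated from layout — can be sketched in a few lines. The snippet below is purely illustrative and is not the authors' tool: the `structure` table is a hypothetical stand-in for the META document, mapping layout elements to data concepts.

```python
# Illustrative sketch (not the authors' tool): extract the data content of
# static HTML pages that share a common structure into an XML document,
# separating data from layout as the abstract describes.
import re
import xml.etree.ElementTree as ET

# Two static pages in which data and layout information are interleaved.
pages = [
    "<html><body><h1>Alice</h1><p>Researcher</p></body></html>",
    "<html><body><h1>Bob</h1><p>Engineer</p></body></html>",
]

# A hypothetical META-like description of the common page structure:
# each (tag, field) pair maps a layout element to a data concept.
structure = [("h1", "name"), ("p", "role")]

def extract(page_html):
    """Return an XML element holding only the data of one page."""
    person = ET.Element("person")
    for tag, field in structure:
        match = re.search(rf"<{tag}>(.*?)</{tag}>", page_html)
        if match:
            ET.SubElement(person, field).text = match.group(1)
    return person

root = ET.Element("people")
for page in pages:
    root.append(extract(page))

xml_data = ET.tostring(root, encoding="unicode")
print(xml_data)
```

A real reengineering tool would of course handle alternative layouts for the same concept and repeated structures, which is exactly what the META document is said to capture; this sketch only shows the data/layout dissociation step.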
Original language: English
Publication status: Published - 2003

Fingerprint

Reengineering
XML
Websites
HTML
Data structures

Cite this

@misc{c1f714dff59e463eac43730c882eae86,
title = "A Methodological Approach for Web Sites Reengineering",
abstract = "Modern technologies allow web sites to be dynamically managed by building pages on the fly through scripts that fetch data from a database. Dissociating data from layout directives makes data easy to update and presentation homogeneous. However, many web sites are still made of static HTML pages in which data and layout information are interleaved, which leads to out-of-date information, inconsistent style, and tricky, expensive maintenance. This talk presents a tool-supported methodology to reengineer web sites, that is, to extract the page contents. All the pages recognized as expressing the same application (sub)domain are analyzed to derive their common structure. This structure is formalized by an XML document, called META, which is then used to extract an XML document containing the data of the pages and an XML schema validating these data. The META document can describe various structures, such as alternative layouts and data structures for the same concept, structure multiplicity, and the separation between layout and informational content. The XML schemas extracted from different page types are integrated and conceptualised into a unique schema describing the domain covered by the whole web site. Finally, the data are converted according to this new schema so that they can be used to produce the renovated web site. These principles are illustrated through a case study using the tools that create the META document and extract the data and the XML schema.",
author = "Fabrice Esti{\'e}venart and Aurore Francois and Jean Henrard and Jean-Luc Hainaut",
year = "2003",
language = "English",
type = "Other",

}

TY - GEN

T1 - A Methodological Approach for Web Sites Reengineering

AU - Estiévenart, Fabrice

AU - Francois, Aurore

AU - Henrard, Jean

AU - Hainaut, Jean-Luc

PY - 2003

Y1 - 2003

N2 - Modern technologies allow web sites to be dynamically managed by building pages on the fly through scripts that fetch data from a database. Dissociating data from layout directives makes data easy to update and presentation homogeneous. However, many web sites are still made of static HTML pages in which data and layout information are interleaved, which leads to out-of-date information, inconsistent style, and tricky, expensive maintenance. This talk presents a tool-supported methodology to reengineer web sites, that is, to extract the page contents. All the pages recognized as expressing the same application (sub)domain are analyzed to derive their common structure. This structure is formalized by an XML document, called META, which is then used to extract an XML document containing the data of the pages and an XML schema validating these data. The META document can describe various structures, such as alternative layouts and data structures for the same concept, structure multiplicity, and the separation between layout and informational content. The XML schemas extracted from different page types are integrated and conceptualised into a unique schema describing the domain covered by the whole web site. Finally, the data are converted according to this new schema so that they can be used to produce the renovated web site. These principles are illustrated through a case study using the tools that create the META document and extract the data and the XML schema.

M3 - Other contribution

ER -