Navigation on unmarked and possibly poorly delineated roads, where the boundary between road and non-road surfaces is not clearly indicated, is a particularly challenging task for autonomous vehicles. The results of this study show that fairly robust navigation strategies can be generated by a robot equipped with a dynamic, active-vision-based control system represented by an artificial neural network synthesized using evolutionary computation techniques. In the experiments described in this paper, a simulated Pioneer robot is required to visually navigate multiple poorly delineated roads that differ in terms of variations in luminance and/or chrominance between the road and the adjacent non-road areas. Low-resolution camera images are processed by a mechanism that continuously adjusts the contribution of each component of a three-dimensional colour model (e.g., R, G and B) to the generation of the robot's perceptual experience. We show that the best controller can successfully drive a simulated Pioneer robot in environments with colour characteristics never encountered during the design phase, and can operate with colour models never used during training. We show that the dynamic differential weighting of the colour components is underpinned by a complex pattern of neural activity that allows the robot to adapt its perceptual system to the colour characteristics of different visual scenes. We also show that the controller can be easily ported onto real hardware: we present the results of a series of tests in which a physical Pioneer robot is required to navigate various poorly delineated pedestrian roads.
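
The dynamic differential weighting of colour components described above can be sketched as follows. This is a minimal illustration only: the function name, the image shape, and the example weight values are assumptions for exposition, not the evolved controller's actual parameters, which the paper's neural network adjusts continuously at run time.

```python
import numpy as np

def weighted_percept(image, weights):
    """Collapse a low-resolution RGB image into a single-channel
    percept via a weighted sum of its colour components.

    image:   (H, W, 3) array with values in [0, 1]
    weights: length-3 sequence of per-channel weights (in the paper,
             these are adjusted dynamically by the controller)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalise so the percept stays in [0, 1]
    return image @ w         # (H, W): weighted combination of R, G, B

# Illustrative call with a stand-in camera frame and arbitrary weights
rng = np.random.default_rng(0)
frame = rng.random((8, 8, 3))
percept = weighted_percept(frame, [0.2, 0.7, 0.1])
print(percept.shape)  # (8, 8)
```

Because the weights are normalised before mixing, the same function applies unchanged to other three-dimensional colour models (e.g., HSV or YUV channels in place of R, G and B), which mirrors the paper's claim that the controller can operate with colour models never used during training.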