Algorithmic Drive

Since the summer of 2017, artist François Quévillon has been collaborating with NECOTIS member Etienne Richan on his latest work: Algorithmic Drive. “It is part of a corpus that focuses on technologies used by autonomous vehicles and on compilations of dashcam footage commonly found on the web. The device puts the programmed behavior of autonomous robotics in juxtaposition with the unpredictable nature of our world.”¹

The audiovisual installation presents thousands of video clips selected from two years of dashcam footage captured by François as he drove through Canada and the United States across various seasons and weather conditions. Spectators are invited to explore a common experience (driving) from the perspective of the information recorded by a computer. Computer vision, audio analysis and signal processing techniques are used to extract descriptive features for each clip, and an endless sequence is assembled from videos matching these features as they fluctuate over time.
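To give a concrete sense of what such per-clip descriptors might look like, here is a minimal sketch of audio feature extraction, assuming each clip's audio track has been exported to a WAV file and that the librosa library is used; the function name and the choice of descriptors are illustrative assumptions, not the installation's actual pipeline.

import numpy as np
import librosa  # assumed audio analysis library, not necessarily the one used for the work


def audio_descriptors(wav_path):
    """Summarize one clip's audio track with a few simple descriptors (illustrative only)."""
    y, sr = librosa.load(wav_path, sr=None, mono=True)           # audio samples and sample rate
    rms = librosa.feature.rms(y=y)[0]                            # frame-wise RMS energy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]  # frame-wise spectral centroid
    return {
        "loudness": float(np.mean(rms)),         # rough proxy for perceived loudness
        "brightness": float(np.mean(centroid)),  # rough proxy for pitch/brightness
    }


# Example with a hypothetical file name: descriptors = audio_descriptors("clip_0001.wav")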


A graphical interface allows the exploration of this bank of videos by specifying the desired values of certain features. The interface is composed of a screen displaying the audio and video data used in the analysis and a row of rotary knobs for controlling the desired value of each feature. For example, turning the knob labeled Vehicle speed to its lowest setting changes the sequence to videos where the car is stationary, while turning it to its maximum sends the viewer barreling down the passing lane of a highway.
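As a rough illustration of how the knob positions could drive clip selection, the sketch below assumes that every clip has already been reduced to a feature vector normalized to the knobs' 0-to-1 range; the feature bank, the feature names and the nearest-neighbour matching are assumptions made for the example, not the work's actual code.

import numpy as np

# Hypothetical feature bank: one row per clip, one column per feature,
# with every feature normalized to the 0-to-1 range of the knobs.
feature_names = ["vehicle_speed", "loudness", "tree"]
clip_features = np.random.rand(1000, len(feature_names))  # stand-in for real analysis results


def next_clip(knob_values, exclude=None):
    """Return the index of the clip whose features best match the current knob positions."""
    target = np.array([knob_values[name] for name in feature_names])
    distances = np.linalg.norm(clip_features - target, axis=1)  # Euclidean distance to each clip
    if exclude is not None:
        distances[exclude] = np.inf  # avoid immediately repeating the clip that just played
    return int(np.argmin(distances))


# Example: knobs set for a fast, quiet drive with little vegetation in view.
print(next_clip({"vehicle_speed": 1.0, "loudness": 0.2, "tree": 0.0}))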

A total of 21 features are available to explore the audio, visual and vehicular modalities. The audio features allow one to specify parameters such as the loudness or pitch of the sounds captured in the recording, while the vehicular features are related to the car’s sensors, such as the ambient temperature, the engine’s RPM or the bumpiness of the road. The visual features are primarily based on semantic segmentation of video frames by SegNet². This deep learning model produces a coloured label for each pixel in an image, where each colour corresponds to a semantic category such as tree, vehicle, road, building or pedestrian. Turning the knob associated with the tree category will play videos where the field of view is dominated by vegetation. This can be combined with a low value of the loudness feature to find quiet scenes in nature. Conversely, by maximizing loudness and minimizing tree, one finds oneself in a noisy urban environment.
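To make the link between a per-pixel segmentation and a single knob value concrete, the following minimal sketch computes the fraction of a frame occupied by one semantic class; the integer label map and the TREE class index are hypothetical placeholders rather than SegNet's actual output format or label set.

import numpy as np

TREE = 5  # hypothetical index of the "tree" class in the label map


def class_proportion(label_map, class_index):
    """Fraction of pixels in a segmented frame that belong to one semantic class."""
    return float(np.mean(label_map == class_index))


# Example with a fake 360x480 label map drawn from 12 classes.
labels = np.random.randint(0, 12, size=(360, 480))
print(class_proportion(labels, TREE))  # value in [0, 1], comparable to a knob position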

The work was first shown to the public during a residency in October at Daïmon in Gatineau. Subsequently, it was presented at Sporobole in Sherbrooke from November 5 to 25.

Other works by the artist, including previous collaborations with other NECOTIS members, can be viewed on François’ website: francois-quevillon.com


Upcoming exhibitions:

Artist in residence at Avatar
Avatar, Quebec City (QC), Canada.
Residency from January 8 to 22, 2019.
Presentation on January 22, 2019.
Curator: Eric Mattson.

Exhibition Manœuvrer l’incontrôlable at Expression
Expression, Centre d’exposition de Saint-Hyacinthe (QC), Canada.
Exhibition from February 9 to April 21, 2019.
Opening on February 9, 2019, at 2 p.m.

 

Credits:

Software development: Etienne Richan
Production of the physical controller: Artificiel (Alexandre Burton and Samuel St-Aubin)

Thanks to the Canada Council for the Arts and the Conseil des arts et des lettres du Québec for their support.

References:

[1] Translated from the Algorithmic Drive web page. http://francois-quevillon.com/w/?p=1470

[2] Alex Kendall, Vijay Badrinarayanan and Roberto Cipolla, “Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding.” arXiv preprint arXiv:1511.02680, 2015.