
# workshop.mainz

**Restyling any Wikipedia article (for print) with Etherpad (collectively).**

A small bundle of code made by Amélie Dumont & Doriane Timmermans for a web2print workshop at the _University of Applied Sciences Mainz_, which took place on the 1st of April 2023.


install requirements

    pip3 install -r requirements.txt


launch

    python3 app.py


You can request any Wikipedia page by passing its name in the URL route, like so:

    http://127.0.0.1:5000/Mushroom


The associated pad is automatically created by clicking on the `specific pad` link.

## setup documentation

This setup uses a [Flask application](https://flask.palletsprojects.com/en/2.2.x/) to dynamically request Wikipedia pages through a Python module.

Users can enter any route on their `localhost` (or connect to someone else's `ip` over a local network).

    localhost:5000/python

The route is then interpreted as a [Wikipedia Python API](https://wikipedia-api.readthedocs.io/) request for a specific page; in this example, we get back the HTML of the Wikipedia page for `python`.

The styling of the article is entirely reset (removing inline `<link>` tags, and overriding default browser styles with a `reset.css`).

You can collectively write new CSS styles in a dedicated pad for each article. Those pads are generated automatically, simply by using the Wikipedia route/request id in the pad's URL.

Furthermore, a global pad is created; the CSS styles defined in it are applied to every Wikipedia article.
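Deriving the pad URLs from the route id could look like this. The pad host and the `mainz-` prefix are hypothetical placeholders, not the ones used in the workshop:

```python
# Hypothetical pad host; the workshop used its own Etherpad instance.
PAD_HOST = "https://pad.example.org/p/"

def pad_url(article_name=None):
    """Return the global pad URL, or the dedicated pad for one article.

    Etherpad creates a pad on first visit, so simply linking to this
    URL is enough to bring the pad into existence.
    """
    pad_id = "mainz-global" if article_name is None else f"mainz-{article_name}"
    return PAD_HOST + pad_id

# pad_url("Mushroom") -> "https://pad.example.org/p/mainz-Mushroom"
# pad_url()           -> "https://pad.example.org/p/mainz-global"
```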

To use the content of the pads as stylesheets, we need to make an AJAX request and fill in `<style>` tags through a `data-pad` attribute, because some browsers raise a MIME type restriction error (expecting `text/css` instead of `text/plain`).

To optimize this system, a cache has been added, so the Wikipedia request is made only once per page.
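The caching idea reduces to "fetch on first request, reuse afterwards". A minimal in-memory sketch (the real app may cache to disk, and `fetch` here is a hypothetical stand-in for the actual Wikipedia request code):

```python
# Simple memoization: each article is fetched from Wikipedia
# at most once, then served from the dict on later requests.
_cache = {}

def get_article(name, fetch):
    """Return the cached article HTML, fetching it only on a miss."""
    if name not in _cache:
        _cache[name] = fetch(name)
    return _cache[name]
```

Because the cache lives in memory, it empties whenever the Flask process restarts, which is likely why a persistent "local cache system" still appears in the todo list below.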

This is what we can get from the Wikipedia API:

<!-- https://wikipedia-api.readthedocs.io/en/latest/API.html?highlight=WikipediaPage#wikipediapage -->
https://wikipedia.readthedocs.io/en/latest/

## todo

* local cache system for wikipage
* not found error template page