Collecting files from org-reveal presentations

I use org-mode to write my presentation slides, both for lectures and for teaching. I often export them to HTML with reveal.js, which gives me good-looking slides that are easy to distribute on my web page. But I'm not always consistent about where I keep the files used in a presentation (pictures, videos and so on), and since I don't like having duplicates scattered across my computer, I usually link to a file where it already is rather than copying it next to the main file. (The Unfinder: Finding and reminding in electronic music is a paper that discusses this issue from the perspective of electronic music practices.) The upside is that I avoid multiple copies; the downside is that if I move a file to another location, all links to it break. This becomes a real annoyance when I want to publish a presentation on my web site.
I could (and should) change my behaviour, but that wouldn't help the close to a hundred files I already have. Hence, I wrote a little bash script to help me out. This pipeline finds all the files linked to in an org file via an org link:
gs=$(grep -E '\[\[file:.*\.(png|jpg|jpeg|pdf)\]\]' "$1" | sed -e 's/.*\[\[file:\([^]]*\)\]\].*/\1/')
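To see what the pipeline produces, here is a quick demonstration against a tiny org file (the file contents and paths are made up for illustration); it prints the bare paths found inside the [[file:...]] links:

```shell
# Create a small sample org file (hypothetical content) to test the pipeline.
cat > /tmp/sample.org <<'EOF'
* Slide one
[[file:~/pictures/diagram.png]]
Some text with no link.
[[file:figures/plot.jpg]]
EOF

# Same idea as the pipeline above: grep the image links, strip the org syntax.
gs=$(grep -E '\[\[file:.*\.(png|jpg|jpeg|pdf)\]\]' /tmp/sample.org \
     | sed -e 's/.*\[\[file:\([^]]*\)\]\].*/\1/')
echo "$gs"
```

Note that a bracket expression like `[png|jpg|pdf]` would match a single character from that set, not the whole alternatives, which is why `grep -E` with a `(png|jpg|jpeg|pdf)` group is used.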
and this bit gets all other image files (backgrounds etc.):
reveal=$(grep -E '^:reveal_background:.*\.(png|jpg|jpeg|pdf)' "$1" | sed -e 's/:reveal_background:\(.*\)/\1/')
After that I merely loop over these lists and copy every file into an 'img' directory inside a defined export directory, starting with the images in the $gs variable:
for path in $gs
do
    # list all images with absolute paths and copy them to the new location
    if [ $(echo $path | grep -c "^~") -eq 1 ]
    then
        p=$(echo $path | sed -e 's/~/\/Users\/henrik_frisk\//g')
        cp $p $export_path/img/
    else
        # list all images with relative paths and copy them to the new location
        cp $file_dir/$path $export_path/img/
    fi
done
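The hard-coded /Users/henrik_frisk path in that loop only works on one machine. A more portable variant of the tilde expansion could use $HOME instead; this is a sketch of mine, and the helper name expand_tilde is hypothetical, not from the script:

```shell
# Sketch: expand a leading ~ to the current user's $HOME rather than a
# hard-coded home directory (expand_tilde is a hypothetical helper name).
expand_tilde() {
    case "$1" in
        "~"/*) printf '%s\n' "$HOME${1#\~}" ;;
        *)     printf '%s\n' "$1" ;;
    esac
}

expand_tilde "~/pictures/diagram.png"   # prints the path under $HOME
expand_tilde "figures/plot.jpg"         # relative paths pass through unchanged
```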
Then I rewrite the links in the source file so that they point to the new locations in the export directory:
for path in $gs
do
    sed -i "" "s#file:$path#file:img/$(basename $path)#" "$1"
done
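The '#' delimiter in that sed command avoids having to escape the slashes that $path contains. As a quick check on a throwaway file (the path is made up, and -i is dropped so this runs on both GNU and BSD sed), the rewrite turns a home-directory link into an img/ link:

```shell
# A throwaway file to demonstrate the link rewrite (hypothetical path).
demo=/tmp/rewrite_demo.org
printf '[[file:~/pictures/diagram.png]]\n' > "$demo"

path='~/pictures/diagram.png'
# '#' as the delimiter because $path contains slashes:
sed "s#file:$path#file:img/$(basename "$path")#" "$demo"
# prints: [[file:img/diagram.png]]
```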
After all operations are carried out I have the source file and all images neatly collected in one single directory. I also created a 'slides' page in Jekyll where I collect all my presentations. The script goes on to create an entry there with a presentation preview and a link to the actual presentation, but only if an entry doesn't exist already. The sed command here is a bit convoluted (it is not recommended to use sed to edit markup, there are much better tools for this) but I preferred to stick with sed:
if [ $(cat ${jekyll_page} | grep -c ${file_nosuff}) -eq 0 ]
then
    sed -i "" "/-----.*/a\\
$update_web_a $update_web_b\\
" $jekyll_page
else
    echo "$file_name exists, will not update slides.md"
fi
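For reference, here is the same guarded append run against a throwaway page (the file name and entry text are invented for the example). The a\ command inserts its text on the line after the /-----.*/ match, and the grep -c guard prevents duplicate entries; -i is dropped so the snippet runs on both GNU and BSD sed:

```shell
# A throwaway Jekyll-style page with a '-----' marker line (hypothetical).
page=/tmp/slides_demo.md
printf -- '-----\nolder entry\n' > "$page"

entry='<a href="my_presentation.html">my_presentation</a>'
if [ "$(grep -c 'my_presentation' "$page")" -eq 0 ]
then
    # Append the entry on the line after the marker, without in-place editing.
    sed "/-----.*/a\\
$entry" "$page" > "$page.tmp" && mv "$page.tmp" "$page"
fi
cat "$page"
```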
The variables update_web_a and update_web_b contain the actual HTML code that will be inserted, and $jekyll_page is the markdown page that holds the list of published slides.
Now I can issue the command:
$ normalize_paths ../../../Documents/my_presentation.org ./export_dir/ 1
This will go through all the png/jpg/pdf files that are linked to in my_presentation.org and copy them over to ./export_dir/img. It will then change the paths in the original file and copy it to ./export_dir, which should now contain all the files necessary to export the presentation anew.
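Put together, the pieces above can be sketched as a single function. This is my reconstruction from the snippets, not the author's actual script; the argument order follows the usage example, and the third argument (the Jekyll update flag) is left out:

```shell
# Sketch of the whole flow as one function, reconstructed from the snippets
# above (normalize_paths here is not the original script).
normalize_paths() {
    org_file=$1
    export_path=$2
    file_dir=$(cd "$(dirname "$org_file")" && pwd)
    out_file="$export_path/$(basename "$org_file")"

    mkdir -p "$export_path/img"

    # 1. collect the paths of all org-linked images
    gs=$(grep -E '\[\[file:.*\.(png|jpg|jpeg|pdf)\]\]' "$org_file" \
         | sed -e 's/.*\[\[file:\([^]]*\)\]\].*/\1/')

    # 2. copy each one into the export img directory
    for path in $gs
    do
        case "$path" in
            "~"/*) cp "$HOME${path#\~}" "$export_path/img/" ;;
            /*)    cp "$path" "$export_path/img/" ;;
            *)     cp "$file_dir/$path" "$export_path/img/" ;;
        esac
    done

    # 3. rewrite the links in a copy of the org file
    cp "$org_file" "$out_file"
    for path in $gs
    do
        sed "s#file:$path#file:img/$(basename "$path")#" "$out_file" \
            > "$out_file.tmp" && mv "$out_file.tmp" "$out_file"
    done
}
```

Called as `normalize_paths my_presentation.org ./export_dir`, it copies the linked images and writes a rewritten copy of the org file into the export directory, leaving the original untouched.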
I have a separate script that copies everything in ./export_dir to my Jekyll directory and syncs it with the server. The snippets above are not the complete script, but they give a rough idea of how it works.