Tag Archives: Javascript

jQuery: How to extract a tag from an HTML response

I'm making a website and using Ajax for some things. Sometimes those requests fail and return custom error pages. I made the error pages to be helpful, but since you can only see them in the browser developer console, they were a bit of a hassle to look at.

To make it easier to see what the error was, I figured I could just parse the returned HTML, extract the message I knew was there, and insert it into the page that way.

And you’d think the following would work fine:

$(document).ajaxError(function(event, jqxhr, settings, error)
{
    // Find the message in the response HTML
    var m = $(jqxhr.responseText)
        .find('#message');

    // Except .find() doesn't find anything

    // And we replace our DOM with nothing
    $('#content')
        .replaceWith(m);
});

But… No… Apparently, since the response was a complete HTML page, i.e. including the html, head and body tags, jQuery got a bit tripped up trying to parse it. I'm actually not sure whether it's jQuery or the native browser parsing behind it that's causing this, but where there's a will, there's a way:

$(document).ajaxError(function(event, jqxhr, settings, error)
{
    // Find the inner HTML of the body tag
    var body = /<body[^>]*>([\s\S]+)<\/body>/
        .exec(jqxhr.responseText);

    // Parse the HTML
    body = $.parseHTML(body[1]);

    // Append the HTML to a non-special root tag
    body = $('<output>').append(body);

    // And *now* we can finally find our message
    var message = body.find('#message');

    // And add it to our DOM
    $('#content')
        .replaceWith(message);
});

¯\_(ツ)_/¯

PHP: Easy Javascript compression with Closure Compiler

Here’s how to fairly easily compress/minify Javascript by using the Closure Compiler Service API.

This is more or less verbatim how I'm currently doing it over at bibelstudiet.no in my script controller class, and so far it's been working great. Just remember to cache the result on your server so that you're not calling the API more often than necessary. In my case that's handled further up the chain by the parent class 🙂

<?php
ob_start('ob_gzhandler');
header('Content-Type: text/javascript; charset=utf-8');

$files = ['foo.js', 'bar.js'];

// Read and concatenate the source files
$js = array_map('file_get_contents', $files);
$js = implode(PHP_EOL.PHP_EOL, $js);

$ch = curl_init();
curl_setopt_array($ch, array
(
    CURLOPT_URL => 'https://closure-compiler.appspot.com/compile',
    CURLOPT_POST => TRUE,
    CURLOPT_POSTFIELDS => http_build_query([
        'language' => 'ECMASCRIPT5',
        'output_info' => 'compiled_code',
        'output_format' => 'text',
        'compilation_level' => 'SIMPLE_OPTIMIZATIONS',
        'js_code' => $js,
    ]),
));

// curl_exec() echoes the compiled code straight into the output buffer
if(curl_exec($ch) === FALSE)
    http_response_code(500) and exit("// ERROR: ".curl_error($ch));

// A (near) empty response means the compilation failed, so fall back to the original
if(curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD) <= 1)
    echo $js;

curl_close($ch);

Note that if the compilation fails for any reason, the response will be empty. In that case I've chosen to output the original Javascript instead so the site doesn't suddenly break. To check what went wrong, you need to repeat the request, but with output_info set to errors instead of compiled_code.
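
If you want a quick way to do that check, a minimal sketch could look something like the following. It reuses the same placeholder file names and request parameters as the script above; only output_info changes:

<?php
// Sketch: repeat the request, but ask the API for errors instead of compiled code
$files = ['foo.js', 'bar.js'];
$js = implode(PHP_EOL.PHP_EOL, array_map('file_get_contents', $files));

$ch = curl_init();
curl_setopt_array($ch, array
(
    CURLOPT_URL => 'https://closure-compiler.appspot.com/compile',
    CURLOPT_POST => TRUE,
    CURLOPT_RETURNTRANSFER => TRUE,
    CURLOPT_POSTFIELDS => http_build_query([
        'language' => 'ECMASCRIPT5',
        'output_info' => 'errors',
        'output_format' => 'text',
        'compilation_level' => 'SIMPLE_OPTIMIZATIONS',
        'js_code' => $js,
    ]),
));

// With output_format set to text, any errors come back as plain text
header('Content-Type: text/plain; charset=utf-8');
echo curl_exec($ch);
curl_close($ch);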

See their API documentation for more information about the parameters.
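
And as for the caching I mentioned at the top: exactly how you do it will depend on your setup (mine lives in a parent class), but a minimal file-based sketch could look something like this. The file names and the cache path are just placeholders:

<?php
// Sketch: serve a cached copy unless any of the source files are newer than it
$files = ['foo.js', 'bar.js'];
$cache = 'cache/compiled.js';

header('Content-Type: text/javascript; charset=utf-8');

if(is_file($cache) and filemtime($cache) >= max(array_map('filemtime', $files)))
{
    readfile($cache);
    exit;
}

// Otherwise compile, using the same request as in the script above
$js = implode(PHP_EOL.PHP_EOL, array_map('file_get_contents', $files));

$ch = curl_init();
curl_setopt_array($ch, array
(
    CURLOPT_URL => 'https://closure-compiler.appspot.com/compile',
    CURLOPT_POST => TRUE,
    CURLOPT_RETURNTRANSFER => TRUE,
    CURLOPT_POSTFIELDS => http_build_query([
        'language' => 'ECMASCRIPT5',
        'output_info' => 'compiled_code',
        'output_format' => 'text',
        'compilation_level' => 'SIMPLE_OPTIMIZATIONS',
        'js_code' => $js,
    ]),
));

$compiled = curl_exec($ch);
curl_close($ch);

// Fall back to the original Javascript if the compilation failed,
// otherwise save the result for next time
if($compiled === FALSE or strlen(trim($compiled)) <= 1)
    $compiled = $js;
else
    file_put_contents($cache, $compiled);

echo $compiled;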

Simple PHP proxy for cross origin HEAD requests

Update: I've refactored this into a much improved version, which you can find on GitHub and Packagist.

I have some Javascript that runs through all the URLs on a site of mine to warm up the cache and check for dead links and other problems. The problem is that the site also includes embedded video files hosted by someone else. Those requests fail, of course, because of cross-origin restrictions, and since I'm not in control of their servers I needed a workaround.

I wrote this small PHP script to work as a proxy, and as far as I can see it works pretty great. It's short and easy to follow as well, so that's always fun 🙂

Basically, the script just passes on whatever request headers it gets to the target URL and pipes back whatever headers it gets in the response. It seems to work, but let me know about weaknesses and easy improvements if you see any 🙂

<?php

// Pass along the browser's request headers, except a few that would mess up the request
$request_headers = getallheaders();
unset($request_headers['Cookie']);
unset($request_headers['Host']);

// Convert the headers to the "Key: value" strings curl expects
foreach($request_headers as $key => &$value)
    $value = $key.': '.$value;
unset($value);

// The target URL is passed as the raw query string
$url = $_SERVER['QUERY_STRING'];
$limit = 20;

$curl = curl_init();
// Follow redirects manually (see the notes below), up to $limit hops
do
{
    curl_setopt_array($curl, array
    (
        CURLOPT_URL => $url,
        CURLOPT_HTTPHEADER => $request_headers,
       
        CURLOPT_RETURNTRANSFER => TRUE,
        CURLOPT_HEADER => TRUE,
        CURLOPT_NOBODY => TRUE,

        CURLOPT_FOLLOWLOCATION => TRUE,
        CURLOPT_MAXREDIRS => $limit--,
    ));

    // Buffer and discard anything curl_exec() might print (see the notes below)
    ob_start();
    $headers = trim(curl_exec($curl));
    $url = curl_getinfo($curl, CURLINFO_REDIRECT_URL);
    ob_end_clean();
}
while($url and $limit > 0);

curl_close($curl);

// Keep only the last header block, i.e. the final response after any redirects
$headers = trim(substr($headers, strrpos($headers, "\r\n\r\n")));

// Replace our own response headers with the ones we got back
header_remove();
foreach(explode("\r\n", $headers) as $h)
    header($h);

Notes

  • The Cookie and Host headers sent from the browser are removed so they don’t mess up the request.
  • CURL wasn’t acting properly where I deployed this. It didn’t follow redirects, and curl_exec output content to the browser even with return transfer set to true. So it has a manual workaround for following the redirects and uses output buffering to make sure nothing goes to the browser before we want to.