[pmwiki-users] Feature request: Action lists in skins
Joachim Durchholz
jo at durchholz.org
Sun Apr 10 10:04:11 CDT 2005
Patrick R. Michaud wrote:
> On Sat, Apr 09, 2005 at 01:28:14PM +0200, Joachim Durchholz wrote:
>
>>Now the real problem: Say the action page looks like this:
>>
>>||[[action 1]]||
>>||[[action 2]]||
>>||[[action 3]]||
>>...
>>
>>The skin author has no way of knowing what he's supposed to parse. The
>>wiki admin would have to override the parse function in some way, making
>>things even more complicated.
>
> The skin author still just grabs the <a> tags (after the above has
> been converted from wiki markup into HTML). It's the same regex I gave
> earlier.
Hmm... this places quite rigid restrictions on what can go into an
ActionList wiki page, doesn't it?
>>Another "supposed to parse" problem is how to differentiate between
>>"wrapper" and "separator" elements. Say if the structure is
>>[[action 1]] * [[action2]] * [[action3]]
Sorry, I was confused about what context the skin is supposed to parse.
Your strategy is to have an action list on a wiki page where each action
goes on a separate line (maybe similar to wiki trail index pages). If
that's the case, there's no variation.
Did I get that right?
>>>> I'd find it definitely simpler to ask him to insert a special "actions
>>>> go here" markup in the proper place.
>>>
>>> $ActionListFmt
>>>
>>> looks pretty darn simple to me.
>>
>> If that's a proposed new variable: well, I'd like to do it that way, but
>> it wouldn't scale to the more advanced scenarios.
>
> This is exactly what I proposed in my post introducing slice&dice --
> see http://pmichaud.com/pipermail/pmwiki-users/2005-April/012151.html .
Ah, I had forgotten about that.
Well, yes, then I see how a function could make this scale.
I still don't like the idea of parsing. Call it intuition if you like :-)
>>>> Not easy at all. Try to deal with
>>>>
>>>> <a href="..." title="This has an <a> tag.">...</a>
>>>>
>>>> Sure, that's unlikely to happen with an action link,
>>>
>>> It's not only unlikely to happen, it's illegal HTML. The "<a>"
>>> in the title attribute has to be coded as "&lt;a&gt;" .
>>
>> Nope, it's perfectly legal. I checked it late night yesterday on
>> w3c.org. (I can provide the code tomorrow, when I get back to the
>> machine where the example code lives.)
>
> Oops, you're right, it's legal. I was misreading the HTML 4 specification,
> which says (http://www.w3.org/TR/html4/charset.html#h-5.3.2)
>
> Similarly, authors should use "&gt;" (ASCII decimal 62) in text
> instead of ">" to avoid problems with older user agents that
> incorrectly perceive this as the end of a tag (tag close delimiter)
> when it appears in quoted attribute values.
>
> Regardless, it's rare.
Agreed.
Though I was giving that only as an example of the kind of thing that
can bite you. Synthesising data is simply the safer route, and I'd like
to keep PmWiki as reliable as it is today.
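For what it's worth, the "synthesise, don't parse" idea could look
roughly like this (a minimal sketch in Python for brevity; the field
names and URL scheme are my own invention, not PmWiki's):

```python
# Sketch of synthesising action links from structured data, so the
# skin never has to re-parse generated HTML. All names here are
# illustrative assumptions, not PmWiki APIs.
from html import escape

actions = [
    {'action': 'edit', 'label': 'Edit'},
    {'action': 'diff', 'label': 'History'},
    {'action': 'pdf',  'label': 'PDF', 'target': '_blank'},
]

def render_action(a, base='/Main/HomePage'):
    """Build one <a> tag; attribute values are escaped, never parsed back."""
    target = ''
    if a.get('target'):
        target = ' target="%s"' % escape(a['target'], quote=True)
    return '<a href="%s?action=%s"%s>%s</a>' % (
        base, a['action'], target, escape(a['label']))

print(' | '.join(render_action(a) for a in actions))
```

Because the skin works from the data structure, pathological attribute
values can never confuse it the way they confuse a regex over HTML.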
>>php4 -a
>><?php
>>preg_match_all(
>> "/<a\\s.*?<\\/a>/is",
>> '<a href="1" title="</a>">X</a>',
>> $anchors);
>>var_dump($anchors);
>>
>>The match ends at the </a> inside the title= attribute.
>
> I'm perfectly willing to let this extremely rare case trigger a bug,
> especially since there's a simple workaround. Or, saying that we
> should let the rare cases drive us into more complex implementations
> is incorrect -- it's okay if the rare cases are hard(er) to solve
> if the common cases are easy.
Say "extremely rare" means a given problem bites one installation in
100. Then with just 70 such problems, about 50% of installations will
experience some buggy behaviour.
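The arithmetic behind that estimate, assuming the 70 problems strike
independently at 1% each:

```python
# Chance that a given installation escapes all 70 independent
# 1-in-100 problems is 0.99**70; the complement is the chance of
# being bitten by at least one of them.
p_hit = 1 - 0.99 ** 70
print(round(p_hit, 3))  # 0.505
```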
To contradict myself: I'm writing an "HTML pass-thru" markup extension
right now, so I'm currently designing a regex that can parse HTML
attributes. ;-}
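As a rough illustration of what attribute-aware matching takes
(sketched here with Python's re module; the patterns are plain
PCRE-style syntax, so the PHP preg_* equivalents are the same),
compare the naive pattern from the snippet above with one that
consumes quoted attribute values as whole units:

```python
import re

html = '<a href="1" title="</a>">X</a>'

# Naive pattern from the thread: the lazy .*? stops at the first
# "</a>", even when it sits inside a quoted attribute value.
naive = re.search(r'<a\s.*?</a>', html, re.I | re.S)

# Attribute-aware pattern: inside the opening tag, swallow quoted
# strings as units before looking for the closing ">".
aware = re.search(r'''<a\b(?:[^>"']|"[^"]*"|'[^']*')*>.*?</a>''',
                  html, re.I | re.S)

print(naive.group(0))  # <a href="1" title="</a>
print(aware.group(0))  # <a href="1" title="</a>">X</a>
```

Note the aware pattern still ignores comments and nested markup; it
only shows why "grab the anchors" is less trivial than it sounds.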
>> I'm pretty sure that most skin designers won't get the regex right on
>> first attempt. Particularly if they have to modify the regex (say, to
>> cover constructions like "<tr><td><a .../a></td></tr>"); they'd overlook
>> case insensitivity, or greediness, or the potential for attributes;
>> they'd have to parse comments (which means that they'd have to skip any
>> <a href.../a> within a comment).
>
> If all we want to do is extract the anchors, then why do we have to
> parse out table structures? I don't get it. And parsing comments
> is again another very rare case (which I doubt has occurred in real
> application). If a simple solution works for 98% of the cases, I'm
> perfectly willing to let the 2% odd cases have to work harder.
I had overlooked that the wiki page doing the links is going to be
rather restricted.
>>>I can't. We would have to provide a bewildering array of options
>>>and arguments to the recipe writer or wiki administrator, and
>>>neither one is going to have a clue what is output at the end.
>>
>>Nobody is interested in how the HTML looks if it works as intended.
>>
>>Yes, it would be a large array of options and arguments. However, the
>>wiki admin can pick exactly those that are of interest to him, and
>>safely ignore the rest.
>
> How does a wiki admin know exactly which ones are safe to ignore?
Facilities and options: those that don't interest him are irrelevant to
him. That's what I meant by "safely".
> And since the data structure doesn't look like HTML (which the admin
> is somewhat familiar with), how does the wiki admin know which options
> are supported versus which are not?
Well, he's already doing config.php. In 99% of the cases, he'll simply do
include_once('cookbook/some_action_handler.php');
In one or two cases, the recipe author will tell him that if he doesn't
want ?action=pdf to open in a new window by default, he'll have to add
a line like
ConfigureAction('pdf', 'target', '');
Doesn't look very complicated to me...
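A hypothetical sketch of the per-action option table such a call could
maintain (in Python for brevity; ConfigureAction and the option names
are assumptions from this thread, not an existing PmWiki API):

```python
# Hypothetical per-action option table; nothing here is real PmWiki code.
action_options = {}

def configure_action(action, option, value):
    """Let config-file code override one option for one action link."""
    action_options.setdefault(action, {})[option] = value

# Recipe default: ?action=pdf opens in a new window...
configure_action('pdf', 'target', '_blank')
# ...and the admin's one-line override switches that off again:
configure_action('pdf', 'target', '')

print(action_options)  # {'pdf': {'target': ''}}
```

The skin then reads the table when it synthesises each link, so the
admin never touches skin templates or regexes.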
[Note to self: give sample documentation for wiki admins for each effect
present in the currently available skins.]
> Wiki admins are very interested in what the HTML looks like
[I take this to mean "are interested in how the specifications are
transformed into HTML" - looking at the resulting HTML is easy enough,
just tell the browser to show it, and I've done that myself whenever
things didn't work as expected.]
> when he/she can't get things to work as they intend. If everything
> works perfectly the first time, then yes, there's no problem, but
> coding web pages is often a lot of trial and error and figuring out
> what the HTML is telling the browser to do. One is almost forced
> to care what the HTML looks like -- that's the nature of the beast.
Well, yes... though I don't see specifying a bunch of action links as
"coding a web page". True, there are a lot of things that can go wrong,
but I think that would be a problem for whatever approach is chosen.
However, there's always the HTML result as a diagnostic output.
Regards,
Jo