Sunday, October 5, 2008

how much longer with drivers!

my attempts to make debian work on my macbook still go on, and i'm struggling with driver issues one more time. the win2k systems i had to deal with during my military service reminded me that drivers are still a source of problems even if you're using a commercial system.

first of all, i don't understand how people are convinced to buy their computer from one company and their operating system from another. one sells hardware, and that hardware means nothing without software developed by another firm. and it's not just PCs: even when you buy a webcam, you need a computer and a working operating system (which means 3-4 months for a regular windows user, after which the system begins its meaningless crashes), and only then can your webcam serve you, if it feels like it, and if the manufacturer has written a driver for your system.

i think it all went in the wrong direction at some point; IBM's move to open up the PC standard is probably the breaking point. why do devices need an external operating system to work at all? why don't they come with their own operating system inside?

unix is a system designed around file management, and today's operating systems are based on it. the first rule in unix is "everything is a file". in other words, the ethernet card or the webcam you plug into your computer is nothing more than a simple file for unix. for output you write something to the file, and for input you read something from the file. drivers are just translators between the system and the device in these read-write operations. that is to say, computers consist of two concepts: "input-output" and "encode-decode". it was so in the 1900s and it will be so in the 3000s.
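just to make the idea concrete, on a typical unix-like system you really can treat devices exactly like files (a tiny python sketch; /dev/urandom and /dev/null are standard device files on linux and macosx):

  # devices really are just files under unix:
  # /dev/urandom is the kernel's random-number "device", /dev/null is a data sink
  with open("/dev/urandom", "rb") as device:
      data = device.read(16)   # "input": read 16 bytes from the device file
  print(data.hex())

  with open("/dev/null", "wb") as sink:
      sink.write(data)         # "output": write the bytes to another device file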

but i hope we won't be dealing with drivers in the 3000s. in the year 3000, when i board a spaceship, if the ship asks me for a driver CD for the device i brought along, i may well burn that ship down.

thank god, bell labs is nowadays working on the plan9 project, which aims to fix that wrong turn by carrying unix's "everything is a file" idea all the way. the new system takes the internet as its model, and this makes me really hopeful about the future.

making any device work shouldn't be harder than setting up an adsl modem: after wiring it up correctly, it should be enough to open an address in your browser, do the configuration, and have it work.

imagine we're on a spaceship and you want to send a picture of yourself to your friends at home. there are many smart devices around, and they have only two questions in mind: "where will i take input from?" and "where will i send output to?". the questions about encoding and decoding were already placed in their minds when they were born, or afterwards, as hardware or software. what you need is a webcam, a file server to store the taken picture, and an e-mail server to send the message.

now let's answer the devices' questions:

on the webcam's configuration page: when the display is frozen, send the output in this file format with these settings to this file server; for input, use these shortcuts on this keyboard: this shortcut freezes the display and this one makes it run again.

on the file server's configuration page: as input, accept the display stream from this webcam; save the frozen image to this path and, as output, send it to this e-mail server with these settings.

on the e-mail server's configuration page: as input, accept the data coming from the file server; when the transfer is complete, send a message to this e-mail address as output.


if you don't want to save the picture, you can choose not to use a file server; it's enough to change the webcam settings to connect it directly to the e-mail server.
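purely as a thought experiment (none of this corresponds to a real device API; all the names are made up), the configuration pages above could be written down as something like this:

  # made-up configuration for the imagined devices
  pipeline = {
      "webcam": {
          "input":  {"keyboard": "kb-1", "freeze_key": "F1", "resume_key": "F2"},
          "output": {"send_frozen_frame_to": "fileserver-1", "format": "jpeg"},
      },
      "fileserver-1": {
          "input":  {"accept_stream_from": "webcam"},
          "output": {"save_path": "/pictures/", "forward_to": "mailserver-1"},
      },
      "mailserver-1": {
          "input":  {"accept_data_from": "fileserver-1"},
          "output": {"send_message_to": "friends@home.example"},
      },
  }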

unfortunately, today it's not possible to scale a PC or swap out its components this easily, according to your needs, because our products are useless without a PC. we live in a world where PCs are masters and devices are slaves; we are only allowed to use our devices within the limits of our PC. but if the IT sector had built on devices based on sharing and communication from the beginning, we would now be able to use our devices at their real power, without any dependency on the system we're using, or maybe there would be no system at all.

as sun's motto says, "the network is the computer"; i hope it will be.

Saturday, September 27, 2008

back to debian!

i'm close to the end of my second year with my macbook. a few days ago i got suspicious about some strange behavior in my network traffic, and decided to use snort+base to understand what's going on. base is a php application that requires the gd library, but unfortunately the php that comes with leopard doesn't support gd. i googled a bit and learned that i have to compile a new php. that's an irritating side of the mac: some things come with the system, and when they're not enough you have to install something from darwin-ports or elsewhere, and in the end you have two gccs and two phps on your system. this turns your system into a time bomb ready to blow; you can run into problems that are hard to diagnose because of conflicting libraries.

in this angry mood, i took a look at myself and realized that in two years i'd become a man who gets suspicious about his network traffic; a man who gets upset because he can't buy music from the itunes store, because he lives in turkey; a man who closely follows every advertisement for an apple product. i'd been swept up too much by steve jobs' wind and had become a man with unnecessary anxieties.

i typed "ps" in my console and came accross with too many processes. i didn't have an idea about what most of them are doing. i remembered the process list of bekir's computer which uses xmonad as a desktop environment and i recognized that i'm losting control on my machine.

so i decided to return to debian. but it seems i was not so firm in my decision, because i couldn't dare to remove macosx, and i installed debian as a dual boot instead.

with the installation completed, at my first logon, for a second i was in the same mood as a guest worker who comes back home after long years and kisses the ground. when i bought my macbook, compiz and xgl were new terms in the linux world. i saw gorgeous linux desktops on my friends' computers, but i never had the chance to live on one of them. i must say that nothing is the same at my home now. in those seconds i was probably in the same mood as a guest worker who has just returned to istanbul from germany and is crossing the bosphorus bridge for the first time, taking a look at the city.

but after a short while, the broken asphalt and daily problems began to attract my attention: non-working hardware and generally slow performance. the macbook is not a very powerful computer, but with macosx it really performs great, and this raises your expectations of the machine.

i'm looking forward to seeing ubuntu produce its own computers, or a chinese manufacturer do a real optimization, with a real R&D crew, of the linux distribution to be shipped on their product. otherwise, i think software coded to run on any computer can't compete in performance with macosx software coded for specific hardware.

it seems my return won't be so fast; i'm hopeful i can fix the performance and hardware issues after a little research, a kernel compilation and some configuration. first of all, i need to make it possible to send the display to a television or an external monitor, and i need to make skype work with the internal isight camera.

by the way, i strongly recommend snort+base to mac users; you can find a good howto document here. it caught quicktime's and others' traffic immediately.

Tuesday, July 22, 2008

"akbank yatirim" support is added to imkbizle

with version 2.1 of imkbizle, it's possible to retrieve imkb data from "akbank yatırım" as well as "işnet". so now, if a problem occurs on either site, users can go on with their lives using the other one.

also, it's now possible to retrieve extra data from the sites; the first examples are the USD and EURO exchange rates: "USDTRY" and "EURTRY" items have been added to the end of the stock list (this was a user request from the firefox addons site).

user test scenarios and unit tests have been added to the repository. i guess i'm ready to nominate my extension to become public.

Sunday, June 1, 2008

the safest way to get around internet restrictions

formerly, turk telekom was using a very simple method for internet restrictions: only in its dns servers, the address of the original site was replaced with the address of the page where the related court judgement is published. if you didn't use turk telekom's dns servers, you were still able to access forbidden sites.

recently i realized that i can't reach the forbidden youtube despite not using turk telekom's dns servers. i guess the turk telekom engineers got angry at the comments on the internet about how stupid they are, not even being able to block a site properly, and they finally decided to cut the routes to the forbidden sites (i really can't believe the guys who write these provocative messages; how can you think a network engineer is that stupid? i think it was just their good will).

anyway, with the latest restriction method, using a proxy server has become a necessity to access forbidden sites. but using the first proxy address you find on the internet is suicide: that way, you're letting a stranger listen to your internet traffic, and you're making yourself an easy target for "man in the middle" attacks.

if i were you, i wouldn't use any unknown dns or proxy server to get around the restrictions. nobody would want to increase their servers' traffic without a purpose, and most likely that purpose would be evil.

for now, the safest way to get around these restrictions seems to be to use a socks v4 proxy that you open via ssh on a server abroad (if you have one), and to change your browser's settings to use this proxy:
# ssh -D 8080 user@hostabroad

the command above will open a channel on port 8080 that can be used as a socks v4 proxy. for example, if you change your firefox settings (from the Preferences-Advanced tab, in connection settings) to use "localhost:8080" as the proxy server, all of your firefox traffic will be streamed through the "hostabroad" server. the rules of your local internet supplier then no longer matter to you; you now surf the internet within the borders determined by the "hostabroad" server's internet supplier.
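if you want to check the tunnel outside the browser too, here is a small python sketch (assuming the third-party requests library with its SOCKS support, i.e. "requests[socks]", is installed):

  import requests

  # route the request through the local SOCKS tunnel opened by "ssh -D 8080 ..."
  proxies = {
      "http":  "socks4://localhost:8080",
      "https": "socks4://localhost:8080",
  }
  response = requests.get("http://www.youtube.com/", proxies=proxies, timeout=30)
  print(response.status_code)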

unfortunately, because of the mentality that wants my beautiful country to become another iran, it seems it will become a necessity to rent a server abroad just to surf the internet without trouble.

Sunday, May 25, 2008

imkbizle

i opened a new addon project for people who may want to watch the imkb stocks in their portfolio from the status bar while surfing with firefox:

multiple file upload is now possible with jmeter

with my patch accepted, it has become possible to send multiple files with jmeter http requests.
and with this patch, i managed to insert the word "hede" into the jmeter repository :)

Sunday, March 30, 2008

trying to run jmeter tests on macosx

if you're trying to run jmeter tests on leopard like me, and keep seeing errors such as:
(org.apache.jmeter.assertions.XMLSchemaAssertionTest)java.lang.IllegalArgumentException: http://java.sun.com/xml/jaxp/properties/schemaLanguage

i recommend you retry after removing the ~/Library/Java/Extensions/xerces.jar file. you should keep a copy, because you may need it in the future for another program.

Tuesday, January 8, 2008

web 3.0 - Semantic Web

(an article request came from emo after our "Web 3.0 - Semantic Web" seminar at bilgi university; this is the english translation of the article we wrote with bekir. you can see the original article here.)

--

With the spread of blogs, the rise of social networks and the growth of personal media, the data on the net has reached a tremendous size. To prevent this data from becoming garbage, and to prevent monopolies in the internet search business, it is essential to create a semantic web in which there is a context connecting data, concepts and people. The concepts "Web 3.0" and "semantic web" are usually mentioned together. Even though there is no official authority that versions the web, the names "web 1.0", "web 2.0" and "web 3.0" are quite popular among marketing people. These version numbers symbolize eras of the net whose milestones are big economic changes on the internet. In the "web 1.0" era we became acquainted with commerce on the internet. In the "web 2.0" era, e-communities grew around rich and distributed online content, which gave direction to the internet economy; the value of companies began to be measured by their number of active users. Over time, customization became important on the net. Established identities and user habits allowed user-specific appearances, but a customizable appearance is not enough; we also need user-specific services (functionality). Search engines compete with each other to give different results according to the person who is searching. Is the internet ready for that change? We are still using special keywords so that our websites can be found by search engines; the content meant for people and the content meant for machines are kept separate. For truly user-specific search results, it is essential to associate online content with other content and with people. Without this association, it is not possible to ask search engines the kind of simple questions a human would easily understand.

Web 3.0 is a new era in which some major changes on the internet await us: the internet will be based on content associated with other content; it will be possible to form simple sentences from the built-up context; the internet itself will become a huge database; we will be able to ask machines simple questions, and machines will be able to talk to each other in a human-understandable language to find answers to our questions; briefly, machines will learn human talk. Service- and server-centered approaches will give way to user-centered, distributed structures. The semantic web is the infrastructure designed for this era, and it will determine its rules.

First of all, we need words, and dictionaries where we can look up the meanings of those words, in order to associate online content and build a context. Today our websites have non-human visitors too: RSS readers that make our publications easy to follow, planets that merge related blogs, search engines that make our content easy to find, and so on. We have to keep some metadata to attract the attention of search engines, which are probably the oldest non-human visitors of our sites. Metadata is data about data. To determine this metadata, we consider the routine behaviour of human visitors who are searching for something on the net. With the semantic web, the aim is to merge data and metadata. We are looking for new methods to make online content able to explain itself. In this case, keywords will no longer try to explain a whole website; every piece of content will have its own explanation, and the aim is not to leave a single piece of content without a description. Where do we get these definitions? We can find them in dictionary-like specifications. Applications written against the same specifications can talk the same language (FOAF (Friend of a Friend) is a good example of such a specification[8]).

After forming dictionaries, we can turn content-tag associations into simple sentences based on simple grammar rules. We can consider our content associations as the verbs of our sentences. Let's assume we have a website where we publish people's personal information. We look up "personal information" in our dictionary. If we find a definition, our sentence is ready: the subject is the content itself, the object is our site and the verb is "is a personal information of". Put together, the final sentence is "this content is a personal information of this site". And the definition of "personal information" is, say: "a collection of knowledge about a human in which name-surname is mandatory, and nickname, web address, e-mail address, mail address, geo (latitude, longitude), photo, title, notes and identity number are optional". And let's assume we have a name-surname, "john doe", published on our site; then we can make a second sentence: "john doe is a name-surname of this personal information". After tagging our content with the words "personal information" and "name-surname", it is possible to use these tags for any visual or textual design trick that makes our site easy for humans to understand. And with the context we have settled between the words "site", "personal information" and "name-surname", it is possible to make new sentences that are understandable by both humans and machines (e.g. "john doe is a name-surname in this site", which is not written on the site but is easy to derive by simple logic rules).
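As a rough sketch of how such sentences become machine-readable triples (using the third-party rdflib library for Python and the FOAF vocabulary; the site and person URIs below are made up for illustration and are not part of the original article):

  from rdflib import Graph, Literal, URIRef
  from rdflib.namespace import FOAF, RDF

  # hypothetical identifiers, for illustration only
  site = URIRef("http://example.org/")
  person = URIRef("http://example.org/people/john-doe")

  g = Graph()
  g.bind("foaf", FOAF)

  # "this content is a personal information of this site"
  g.add((person, RDF.type, FOAF.Person))
  g.add((site, FOAF.topic, person))

  # "john doe is a name-surname of this personal information"
  g.add((person, FOAF.name, Literal("john doe")))

  print(g.serialize(format="turtle"))

Any application that knows the FOAF dictionary can now read the name-surname out of this description without knowing anything else about the site.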

With all these relations and sentences, it becomes possible to run queries against our site as if it were a database. In figure 1 we see a SPARQL (SPARQL Protocol and RDF Query Language) query that runs on a site where there is an "includes" relation between continents and countries and a "being a capital" relation between countries and cities; it asks for the names of the countries that Africa includes and the names of the capital cities of those countries. When we say "query", SQL queries are the first thing that comes to mind, because of the widespread use of databases today, and this explains the similarity between the SPARQL and SQL syntaxes. The important point here is building an infrastructure that allows this kind of abstraction.

Figure 1: a sample SPARQL query (the original listing is not reproduced here)
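Since the original listing is missing, here is a rough reconstruction of what such a query could look like, run with the rdflib library over a small in-memory graph; the abc: property names are guesses based on the fragment visible in the original figure, not the exact original query:

  from rdflib import Graph, Literal, Namespace

  # hypothetical vocabulary, guessed from the "abc:" prefix in the original figure
  ABC = Namespace("http://example.org/abc#")

  g = Graph()
  g.add((ABC.kenya, ABC.onContinent, ABC.africa))
  g.add((ABC.kenya, ABC.name, Literal("Kenya")))
  g.add((ABC.nairobi, ABC.capitalOf, ABC.kenya))
  g.add((ABC.nairobi, ABC.name, Literal("Nairobi")))

  query = """
  PREFIX abc: <http://example.org/abc#>
  SELECT ?countryName ?capitalName WHERE {
      ?country abc:onContinent abc:africa ;
               abc:name ?countryName .
      ?capital abc:capitalOf ?country ;
               abc:name ?capitalName .
  }
  """

  for row in g.query(query):
      print(row.countryName, row.capitalName)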

The end point of query language abstraction is human talk. The aim is to make it possible for machines to answer a question like "Which countries does Africa contain, and what are the capitals of these countries?" instead of the SPARQL query in figure 1.

We also need an infrastructure that lets machines ask questions of each other, not just human-machine talk. Think of the scenario where the query in figure 1 runs across two sites: one site holds the "includes" relation between continents and countries, and the other holds the "being a capital" relation between countries and cities. The first site is capable of answering the question "Which countries does Africa contain?"; for every country Africa contains, it will need to ask the other site "What is the capital of this country?", and then it will return the accurate answer to the combined question to the user who asked it, using the information it collected from the other site.
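A sketch of that two-site conversation, assuming both sites expose SPARQL endpoints at made-up addresses (the third-party SPARQLWrapper library and the abc: vocabulary are used only for illustration; none of the URLs are real):

  from SPARQLWrapper import SPARQLWrapper, JSON

  def select(endpoint, query):
      """Run a SELECT query against a SPARQL endpoint and return the bindings."""
      client = SPARQLWrapper(endpoint)
      client.setQuery(query)
      client.setReturnFormat(JSON)
      return client.query().convert()["results"]["bindings"]

  # site 1: knows which countries are on which continent (hypothetical endpoint)
  countries = select("http://continents.example.org/sparql", """
      PREFIX abc: <http://example.org/abc#>
      SELECT ?country WHERE { ?country abc:onContinent abc:africa . }
  """)

  # site 2: knows the capitals (hypothetical endpoint); one follow-up question per country
  for row in countries:
      country = row["country"]["value"]
      capitals = select("http://capitals.example.org/sparql", f"""
          PREFIX abc: <http://example.org/abc#>
          SELECT ?capital WHERE {{ ?capital abc:capitalOf <{country}> . }}
      """)
      for cap in capitals:
          print(country, "->", cap["capital"]["value"])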

Communication between machines is not a new issue, but the semantic web will allow human-understandable communication. In the example above, one site asks the other "What is the capital of country Y?" for every country in Africa. A human listening in on the network will understand what the two machines are talking about without reading the spec of any protocol, and without any training in reading such specs, because the protocol used between the two machines will be a simple one based on the simple grammar rules of everyday human talk.

What the semantic web envisions requires user-centered, distributed structures instead of today's service-centered ones. There shouldn't be any restriction forcing us to keep our contact list and our e-mail account in the same service of the same company. The aim is for a contact list created in one service to be easily accessible from any e-mail service we own. For this, the structure in which every user has a separate account for every service has to give way to a structure in which every user has one unique identity that every service asks the user for, like a passport. Users and publishers will be able to decide which language to talk by choosing their dictionaries. A distributed, limitless and secure sharing environment is a requirement for the semantic web.

Figure 2: XHTML to RDF translation by XSLT (the original listing is not reproduced here)
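As a minimal illustration of the translation shown in Figure 2 (using Python's lxml library; the file names below are placeholders, and the original stylesheet is not reproduced):

  from lxml import etree

  # placeholder file names, for illustration only:
  #   page.xhtml - an existing XHTML page whose markup carries the metadata
  #   grddl.xsl  - an XSLT stylesheet that gleans RDF statements from that markup
  xhtml = etree.parse("page.xhtml")
  transform = etree.XSLT(etree.parse("grddl.xsl"))

  rdf = transform(xhtml)  # apply the stylesheet: XHTML in, RDF/XML out
  print(etree.tostring(rdf, pretty_print=True).decode())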


"W3c"[1] working groups and the standards gives direction to semantic web researches. Studies that aims to turn internet into a database continues under two main focus. One way is to use RDF (Resource Definition Language)[3] and OWL (Web Ontology Language)[4] like languages to make semantic sentences and developing applications using that sentences. There are many tools developed according to W3C standards at present to create RDF and OWL, and many tools to run SPARQL queries on these RDF and OWLs, there are also many tools on development process. The number of publications about semantic web and the sizes of developer groups working on semantic web are pretty satisfying. The other way for semantic web studies is GRDDL (Gleaning Resource Descriptions from Dialects of Languages)[5] way in which it's aimed to change existing content a little to make it possible to translate it directly into an semantic format (see Figure 2). In this way tag attributes are used to create RDF or any other format from directly existing content. A well-known method based on this tag logic is "microformat"[7] whose co-founder is a Turk named Tantek Çelik who is an affective "W3C" member. With firefox extension "Tails Export"[9], you can see if the page you're browsing contains any content that is compatible with any microformat, and see what these contents are if exists any. You can download and try an another firefox extension "piggy bank"[11] that's developed under "smile"[10] project to investigate existing semantic web sites and portals. There are many existing applications of semantic web, for well-known dictionaries;

  • SKOS Core[12]

  • Dublin Core[13]

  • FOAF[8]

  • DOAP[14]

  • SIOC[15]

  • vCard in RDF[16]

well-known projects:

  • Pfizer[17]

  • NASA's SWEET[18]

  • Eli Lilly[19]

  • MITRE Corp.[20]

  • Elsevier[21]

  • EU projects (e.g. Sculpteur[22], Artiste[23])

  • UN FAO’s MeteoBroker

  • DartGrid[24]

  • Smile[10]

well-known portals:

  • Vodafone's Live Mobile Portal

  • Sun’s White Paper Collections[26] and System Handbook collections[27]

  • Nokia’s S60 support portal[25]

  • Harper’s Online magazine linking items via an internal ontology[28]

  • Oracle’s virtual press room[29]

  • Opera’s community site[30]

  • Yahoo! Food[31]

  • FAO's Food, Nutrition and Agriculture Journal portal[32]

and so on.

Semantic web research has come a long way. In figure 3, the semantic web concepts are laid out as layers. Up to the last three layers there have been concrete steps, with RDF, OWL, SPARQL and GRDDL. For the "logic", "proof" and "trust" layers there is no concrete output yet, but the "W3C" working groups continue their work. The aim of the "logic" layer is to derive new sentences from existing ones through reasoning with simple logic rules. The "proof" layer will determine how to form evidence establishing the truth of our logic. The top layer, "trust", will contain solutions for securely authenticating users and for protecting people's privacy rights and confidential information.
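As a toy example of the kind of reasoning the "logic" layer aims at, continuing the made-up FOAF example from earlier (deriving "john doe is a name in this site" from the two sentences that are actually written), sketched with rdflib and a single hand-written rule; the ex:mentionsName property is invented for the example:

  from rdflib import Graph, Literal, Namespace, URIRef
  from rdflib.namespace import FOAF

  EX = Namespace("http://example.org/vocab#")   # hypothetical vocabulary
  site = URIRef("http://example.org/")
  person = URIRef("http://example.org/people/john-doe")

  g = Graph()
  g.add((site, FOAF.topic, person))             # the site is about this person
  g.add((person, FOAF.name, Literal("john doe")))

  # one hand-written rule: if a site has a topic and that topic has a name,
  # then that name appears "in" the site
  for s, topic in g.subject_objects(FOAF.topic):
      for name in g.objects(topic, FOAF.name):
          g.add((s, EX.mentionsName, name))     # the derived sentence

  print(g.serialize(format="turtle"))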

Figure 3: Semantic Web Layers (diagram not reproduced here)

The semantic web is not a dream, it's a need. Today's millions of internet users will very soon be billions. We need to give meaning to the mass of online data, which is growing exponentially through the fast-growing internet population and the spreading culture of sharing. The semantic web offers publishers a way to be found easily, and searchers a way to reach the right information quickly. The context settled between sites through these relations turns every site into a servant with a weak artificial intelligence, capable of talking with other servants and with humans. For now we can only follow the internet through the narrow windows provided by today's search engines. In the near future, our searches will set off a chase that follows the paths built by semantic relations like a detective, and it will find the accurate answer to our question. To search your local network, there will be no need for a search engine that has to visit all the sites on your local network and index them beforehand; all search will be real-time. We will be able to determine our own limits ourselves; we can narrow or enlarge our living environment on the net whenever we want. The depth of the search we want to do will no longer depend on the power of search engines; our choices will determine our power.

References:
[1] http://www.w3.org/
[2] http://www.w3.org/2001/sw/
[3] http://www.w3.org/RDF/
[4] http://www.w3.org/2004/OWL/
[5] http://www.w3.org/2001/sw/grddl-wg/
[6] http://www.w3.org/TR/rdf-sparql-query/
[7] http://microformats.org/
[8] http://www.foaf-project.org/
[9] http://addons.mozilla.org/firefox/2240
[10] http://smile.mit.edu/
[11] http://simile.mit.edu/piggy-bank/
[12] http://www.w3.org/TR/swbp-skos-core-guide/
[13] http://www.dublincore.org/
[14] http://usefulinc.com/doap/
[15] http://sioc-project.org/
[16] http://www.w3.org/2006/vcard/ns
[17] http://www.pfizer.com
[18] http://sweet.jpl.nasa.gov/ontology/
[19] http://www.lilly.com/
[20] http://www.mitre.org/
[21] http://aduna.biz/dope/
[22] http://www.sculpteurweb.org/
[23] http://users.ecs.soton.ac.uk/km/projs/artiste/
[24] http://ccnt.zju.edu.cn/projects/dartgrid/intro.html
[25] http://www.forum.nokia.com/
[26] http://www.sun.com/servers/wp.jsp
[27] http://sunsolve.sun.com/handbook_pub/validateUser.do?target=index
[28] http://www.harpers.org/
[29] http://pressroom.oracle.com/
[30] http://my.opera.com/community/
[31] http://food.yahoo.com/
[32] http://www.fao.org/
[33] http://www.w3.org/2001/12/semweb-fin/w3csw
[34] http://www.w3.org/2007/Talks/0831-Singapore-IH/
[35] http://www.w3.org/2007/Talks/0424-Stavanger-IH/
[36] http://www.w3.org/TR/grddl-primer/
[37] http://en.wikipedia.org/wiki/Semantic_Web
[38] http://en.wikipedia.org/wiki/Ontology_%28computer_science%29
[39] http://en.wikipedia.org/wiki/Web_Ontology_Language
[40] http://en.wikipedia.org/wiki/Resource_Description_Framework
[41] http://en.wikipedia.org/wiki/GRDDL
[42] http://sramanamitra.com/2007/02/14/web-30-4c-p-vs
[43] http://www.ozgan.net/?sm=content.ybz&id=63
[44] http://xmlns.com/foaf/spec/