Archive for category Programming

ExtJS Bug – Form doesn’t submit

Let me start off by saying that I love the ExtJS framework, and it has been a pleasure to learn it over the last few days. It is probably the most professional JavaScript framework I've seen, which is why I wanted to add it to my latest app. The documentation is very thorough and the framework is easy to pick up.

However, I’ve spent most of my day (when not taking care of kids and doing school work) trying to figure out why a simple form I’ve created doesn’t submit. The thing that really had me perplexed is that almost the exact same code worked for another form on another page. It was frustrating because I just knew it was something I was doing wrong.

Perhaps the most frustrating part about it was the fact that it was a bug in the framework itself. From what I’ve since found by researching on their forums, the bug was reported a few versions ago. There’s a work-around and I’ll get to that in a bit, but I want everyone to see the code.

var dbPanel = new Ext.form.FormPanel({
    id             : 'dbPanel',
    name           : 'dbPanel',
    height         : 'auto',
    width          : 'auto',
    standardSubmit : true,
    layout         : 'form',
    method         : 'POST',
    url            : 'db_verify.php',
    border         : false,
    bbar           : tb,
    keys           : [{
        key : Ext.EventObject.ENTER,
        fn  : verifyDB
    }]
});

This is the code that doesn't work. It's a basic form panel that should POST its data to db_verify.php. Setting "standardSubmit : true" tells the form panel to use a standard browser submit instead of an Ajax submit. Here is another example that does work:

var loginPanel = new Ext.form.FormPanel({
    id             : "loginPanel",
    height         : 'auto',
    width          : 'auto',
    layout         : 'form',
    border         : false,
    standardSubmit : true,
    url            : 'login.php',
    method         : 'POST',
    bbar           : tb,
    keys           : [{
        key : Ext.EventObject.ENTER,
        fn  : doSubmit
    }]
});

There's very little difference between these two instances of FormPanel. The only difference I could find was that the first one doesn't work and the second one does. In fact, I changed just about every option three times or more just to make sure I wasn't missing anything. Everything I tried gave me the same result: the page would refresh to itself and my form data would simply disappear.

The eventual fix for the problem is to manually set the DOM action for the form when the handler is fired. So, for the first code listing, my handler went from looking like this:

var verifyDB = function(){
     dbPanel.getForm().submit();
};

To looking like this:

var verifyDB = function(){
    // Work-around: explicitly set the form's DOM action before submitting
    dbPanel.getForm().getEl().dom.action = 'db_verify.php';
    dbPanel.getForm().submit();
};

The first handler worked perfectly well with the other form submit. For some reason, it just seems to randomly decide it isn't going to work for this scenario. It's an easy fix, but when you are trying to learn a new framework it's discouraging to hit a bug like this during your first few days.


Twutils.com

I've started a new website and have almost completed development on the first tool. It's a site devoted to Twitter tools. I call it Twutils. The first utility is a spam-removal tool called Spit Remover; I've settled on "Spit" as a good name for Twitter spam. I'm in the process of moving the site to a new host due to DNS issues on the previous host. A few other ideas I have for Twutils are:
1.) Tweet Scheduler
2.) Follower generator
3.) Unfollow those that don't follow you (like Huitter.com's Mutuality)

I’m also planning to keep track of users who are removed with the spit remover. I may use this to show blacklisted spammers. I may generate a list of the most removed spammers, and allow people to remove these people automatically. Or I may just use it to create the biggest spammers list.


Warning: simplexml_load_file() [function.simplexml-load-file]: URL file-access is disabled in the server configuration

If you’ve seen that error message you’ve probably happened upon a security feature that your shared web hosting provider has enabled. There are a few work-arounds for this error but most require you to have certain privileges on the server that you probably don’t have. Quite frankly, if you are getting these errors you probably don’t have the ability to change these settings yourself.

Rather than try to get the provider to change these settings (let’s face it, they have this enabled for a reason and surely someone else has already tried to get this changed, right?) one can easily get around this with Curl. In most cases, curl will be enabled on the server. So here is the quick and dirty way to get around it:

Create a PHP file and name it anything you want. For the sake of this article we’ll refer to it as curl_functions.php. In this file put the following functions:

<?php
// Create a cURL handle configured to return the response as a string
function setupMyCurl() {
    $myCurl = curl_init();
    curl_setopt($myCurl, CURLOPT_RETURNTRANSFER, 1);
    return $myCurl;
}

// Store the handle in a constant so it can be reused anywhere
define("myCurl", setupMyCurl());

// Fetch the contents of a URL and return them as a string
function curl_get_contents($url) {
    curl_setopt(myCurl, CURLOPT_URL, $url);
    return curl_exec(myCurl);
}
?>

Include or require this file. Then, all you have to do is use curl_get_contents($url) in your code to pull the XML into a string, and pass that string to simplexml_load_string() instead of calling simplexml_load_file(). This gives you the same result but works around the disabled URL file-access setting. If you don't have cURL enabled on your host, GET ANOTHER HOST. 🙂
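For example, here's a minimal sketch of how the two pieces fit together. The feed URL and the RSS-style element names are placeholders for illustration only:

<?php
require_once 'curl_functions.php';

// Placeholder URL; substitute the feed you actually need to read
$url = 'http://example.com/feed.xml';

// Fetch the raw XML with cURL, then parse it from the string
$xmlString = curl_get_contents($url);
$xml = simplexml_load_string($xmlString);

if ($xml !== false) {
    // Assumes an RSS-style structure; adjust the element names to your feed
    foreach ($xml->channel->item as $item) {
        echo $item->title . "\n";
    }
}
?>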


Why do Google Search Results Change?

I was recently asked by my wife why Google search results change. I had noticed it before but didn't spend much time dwelling on it, because my first thought was that Google uses many locations and many datacenters to hand out search results. The varying results reflect differences in the data stored at each location. Depending on which datacenter you are getting results from at any given time, you can see a huge change in results. As an example, I made a quick video showing how going through a proxy server can unblock Facebook and YouTube if they're blocked. In this video I'm going through a Linux server in Texas at first. Note that the total results for the keyword while going through the proxy are 282. By removing the proxy and refreshing the search, the number changed dramatically to 635,000 results.

I saw a video explanation of this behavior that compared Google to a beach, and while I enjoyed the analogy, it isn't entirely correct. There is a lot happening on the internet, and there's no way Google can index it all at once, or even catch it all. That's why they have many data centers, each pulling their own part of the weight. I'd imagine that the synchronization of the data takes time, if they actually synchronize the data at all. It may also be that Google randomizes search results a bit in order to gauge the relevancy of each result. At any rate, the keyword results can vary.

After making this video, I also captured the packets using Wireshark and found that the request from my home internet connection was querying IP 208.67.217.231 while my proxy server was pulling the query from 74.125.159.103. Neither of the reported result counts was accurate, either: after digging into the other pages of results there are only 64 results once repeats are omitted. ICHY reports that the keyword has 3,640 competition. So, from what I can see of the data on both sides, ICHY doesn't report very accurate competition numbers according to their own explanation of the relevant results. Other keywords in their list yielded similar discrepancies.


Make Twitter Better

One of the things I like about Twitter, believe it or not, is the simplistic design. There aren't a lot of useless options, and there aren't many things to click on at all, really, compared to other sites. There are a few missing items that should be on the site, however. I just recently installed a Firefox extension that accomplishes everything I need.

I've found that a lot of the Twitter "clients" are lacking. For instance, I can't easily search and follow people from TweetDeck, so I really like to just use the web client. The only thing I was missing on the web interface was a notification of @ replies. Every other option that I found useful in the clients is now available on the web client via the Power Twitter Firefox extension. There are also a few features I wasn't expecting. For instance, Song.ly is now integrated. I had never tried Song.ly until I installed this extension. I love it.

Check out the extension. It’s worth it if you Tweet much at all.


International Whats-hot-weekly.com

I finished up modifications to whats-hot-weekly.com. There is now a drop-down list which lets users choose their country. This is all handled with eBay site IDs. Categories are still being stored in a MySQL database locally on the server, which makes the menus function much better. I've already noticed people in the UK, Canada, Spain, France, and Australia using the site. Currently the site is averaging over 100 unique visitors per day. That isn't great, but it's nice to have even that many people interested in your work.


Really cool online auction idea

At this point, most people have heard of eBay. A high percentage of them have probably signed up for an eBay account and made a purchase on the online auction site. eBay has been the de facto online auction site for years. Through those years many other auction sites have tried to grab some of that market share with little luck. Most just didn't have the right marketing. After all, marketing is a major factor in whether a site succeeds or not. One can have the best site on the internet, but if no one knows anything about it, it'll never rank well on search engines or become a high-traffic site.

Sometimes there are ideas that just can't be ignored, even if they have suffered from poor marketing. This is especially true for one of the coolest sites I've come across recently. It is a great idea, and already there are many sites trying to copy it. The idea is simple really, but it is ingenious. This site is an auction site that works slightly differently. It is possible to win items and end up paying pennies on the dollar for what they're worth. As an example, a MacBook which normally sells for around $1300 could sell for $200.

What’s the catch?

Well, the catch is all in the way you bid on the items. It’s more of a competition. It’s actually pretty addictive. Bids are incremented by a certain amount. There are regular auctions and penny auctions. The regular auctions increment by a larger amount, while penny auctions go up a penny every time someone bids. Any time a person bids, it costs them a certain amount of money. I believe the current going rate is around 75 cents per bid. When they bid, the current bid price goes up a certain amount and they become the high bidder. These auctions are also timed, but every time a person bids, it adds a little more time to the auction. So if one waits till the last second to place a bid, hoping to be the last bidder, the time goes up and a bunch of other people jump on board.

Now it doesn't go on like this indefinitely; there is a permanent end time for each auction as well. It's all pretty easy to follow once you understand what's going on.
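To make the mechanics concrete, here is a rough sketch of the bidding rules as I understand them. The specific numbers (a 75-cent bid fee, a one-penny increment, a 15-second timer extension) are assumptions for illustration, not the site's actual parameters:

<?php
// Rough model of a penny auction's bidding rules (illustrative values only)
const BID_FEE        = 0.75; // what each bid costs the bidder, in dollars
const BID_INCREMENT  = 0.01; // how much the price rises per bid
const TIME_EXTENSION = 15;   // seconds added to the clock on each bid

function placeBid(array $auction, $bidder) {
    // Bidding is only allowed before the hard end time
    if (time() >= $auction['hardEndTime']) {
        return $auction;
    }
    $auction['price']     += BID_INCREMENT;  // the price ticks up a penny
    $auction['highBidder'] = $bidder;        // the bidder becomes the high bidder
    // Each bid costs the bidder money, whether or not they eventually win
    if (!isset($auction['spentByBidder'][$bidder])) {
        $auction['spentByBidder'][$bidder] = 0;
    }
    $auction['spentByBidder'][$bidder] += BID_FEE;
    // Each bid pushes the soft end time out, but never past the hard end time
    $auction['endTime'] = min($auction['endTime'] + TIME_EXTENSION,
                              $auction['hardEndTime']);
    return $auction;
}
?>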

It is very possible to get a great deal on something on this site. I've seen items go for 10% of their retail cost with the winning bidder only spending about ten bucks on bids. There are TVs, game systems, laptops, and many other things auctioned there at amazing prices. Check it out here!!


Updating Whats-hot-weekly

My task for this morning is to add additional site codes to whats-hot-weekly.com. It's not an easy task. At least, it isn't as easy as I thought it would be. All of the categories are pulled from a database that I refresh every so often, and that database only contains the categories for the US eBay site. This is where it gets hairy, because I wasn't aware until now that the other countries' sites use different categories. So now my task is to add all of those categories so I can implement the additional sites.

This is going to take a while. The US alone takes around half an hour, and there are many other sites. I've recreated the database, adding a field to record the site_id. Next I'll run my scripts to populate the database with API calls. After I populate the database for the US, I'll switch the site codes manually and add the next site. I could automate this, but I'd rather monitor it anyway, so I figured I may as well do it manually.
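For the curious, the change amounts to something like the sketch below. The table and column names, the database credentials, and the fetchCategoriesForSite() helper (a wrapper around the eBay GetCategories call) are placeholders for illustration, not the actual code:

<?php
// Illustrative sketch only; names and credentials are placeholders
$db = new mysqli('localhost', 'user', 'pass', 'whw');

// eBay site IDs: 0 = US, 2 = Canada, 3 = UK, 15 = Australia, 71 = France, 186 = Spain
$siteIds = array(0, 2, 3, 15, 71, 186);

$stmt = $db->prepare(
    'INSERT INTO categories (site_id, category_id, category_name, parent_id)
     VALUES (?, ?, ?, ?)'
);

foreach ($siteIds as $siteId) {
    // One API pass per site; the US pass alone takes roughly half an hour
    foreach (fetchCategoriesForSite($siteId) as $cat) {
        $catId    = $cat['id'];
        $catName  = $cat['name'];
        $parentId = $cat['parent'];
        $stmt->bind_param('iisi', $siteId, $catId, $catName, $parentId);
        $stmt->execute();
    }
}
?>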

The big issue that will come up later is currencies. I’ll have to adjust for that. I’ll also have to recreate the same code on the static version of the site.

While I wait for the database to refresh, the old site is still running as normal. I have a development environment that is a working copy of the production environment, so I'll make the switch there and then simply move it over to production. There should be exactly no downtime for the site. I hope.


Just How Relevant is Google

We have all come to expect good things from Google. In fact, many of us have come to believe that Google is the best at everything they do. This is especially true of their original application: search. For the past few years Google has ruled the search market, and it got there by making the most relevant results appear every time a user executed a search.

Those days may be over. Recently, I've noticed that Google isn't giving very relevant content, and I don't think I'm alone. I also have a pretty good idea why we are getting such bad results, and it's not entirely Google's fault. There are many "white hat" and "black hat" search engine optimization techniques being used to manipulate the results. Marketers are trying to draw traffic to their sites; that's how they make money, after all. SEO has been used ever since the first search engine. Google just seems to be lagging behind in making their algorithms detect unwanted SEO.

This isn't to say that those marketers are doing anything wrong, exactly. It's just that most of them are concentrating on Google, which is, after all, the most popular search engine. They know what works to get ranked on Google, so they do it, and they do a lot of it. This skews Google's search results, but doesn't necessarily affect Yahoo's results, because Yahoo uses different algorithms to determine where a result ranks.

Google also is notorious for de-indexing RELEVANT sites by mistake. For instance, this site seems to have been de-indexed, and I’ve not been attempting to SEO this site much at all. It could be due to my use of a WordPress plugin called All-in-one SEO pack, though that really shouldn’t have anything to do with it either.

Here is one example that has been perplexing me for a month or so:
Once upon a time, there was a script for the XChat IRC client called "XLack". It is my favorite system information script for XChat, and I've used it for probably 4 or 5 years. The home site for the script used to be xlack.tk; this was where everyone would go to download it. That site is now a parked domain, and it has been that way for close to a year. If you search Google for "Xlack", xlack.tk is still the number one result. It is no longer relevant at all. It's a parked domain. In fact, if one tries to find a site from which to download the XLack script, one finds that there are none listed on Google.com.

Now take that same search over to yahoo.com. A simple search for "xlack download" gives you this site, which has the relevant download link for the actual script. Yahoo.com provides more relevant results. Try it on any of your searches and see if you don't get better results from Yahoo or even Live.com. I guarantee you'll have more success with them these days than you do with Google.


Automated twitter status updates

Once you have a following on Twitter, it’s easy to gather a little extra traffic to your site from it. To help automate the process, I make use of Twitter’s API and a Linux command line. I create some cron jobs to update my status using curl. This is pretty simple to do and may be helpful for people with a Linux box and the need to advertise something.

The curl command is structured as follows:
curl -u username:password -d status="My new status message" http://twitter.com/statuses/update.xml

Now, it's important to note that for the automated cron jobs the returned XML isn't really needed. You can, however, make the same call from the programming language of your choice to fetch the XML and make use of it. You can also get JSON results by changing the extension from .xml to .json.
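For example, here is a rough PHP equivalent of that curl command, using the same basic-auth status update URL shown above (the username, password, and message are placeholders):

<?php
// Placeholders: substitute your own username, password, and message
$username = 'username';
$password = 'password';
$status   = 'My new status message';

$ch = curl_init('http://twitter.com/statuses/update.xml');
curl_setopt($ch, CURLOPT_USERPWD, $username . ':' . $password);        // same as curl -u
curl_setopt($ch, CURLOPT_POSTFIELDS, 'status=' . urlencode($status));  // same as curl -d
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);  // return the XML instead of printing it

$xmlResponse = curl_exec($ch);
curl_close($ch);

// Parse the returned XML if you want to inspect it
$xml = simplexml_load_string($xmlResponse);
?>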

So, once you have the command, all you have to do is create the cron jobs in Linux. Edit the crontab with crontab -e
Your default editor should open your crontab. Here is an example showing how to create the cron job:
5 * * * * curl -u username:password -d status="My Message" http://twitter.com/statuses/update.xml
That cron job would run at 5 minutes after the hour, every hour, every day. This is, however, not a good idea because your account will not last long 🙂
