Friday, August 5, 2011

Questions from the Ph.D. Entrance Test held at Jamia Millia Islamia


Some of the questions from the Ph.D. entrance test held at Jamia Millia Islamia on 23 July 2011:

The first section was based on questions from a passage on Olmsted's Central Park design, New York.

(1) In which year did Madame Curie receive the Nobel Prize in Physics?

(2) In which year was the UGC founded?

(3) The temperature of the body is controlled by which organ? (Brain/Kidney/Heart/Adrenal)

(4) Deficiency of lipids leads to which disease?

(5) On 21st June, the sun is directly overhead at ………………………….. ?

(6) Which was the first Indian satellite launched from foreign soil?

(7) Which moon mission spent the longest time in space?

(8) The speed of light was first measured by which scientist?

(9) Which is the most recent blood transfusion technique?

(10) Electromagnetic waves are reflected by which layer of the atmosphere?

(11) The slogan "Each one, teach one" was given by whom?

(12) In the sentence "What next?", which part of speech is 'next'?

(13) In the sentence "She is sitting next to him", which part of speech is 'next'?

(14) Synonym of 'clandestine'? (Ans: secret)

(15) Antonym of 'dexterous'?

(16) We should conform ……. the rule. (to/with/of/at)

(17) This is a supplement …………… the book. (of/to/for/at)


Apart from these, there were 10 questions based on teaching aptitude. These were very basic questions of the kind usually asked in the UGC NET and repeatedly asked at Jamia. To prepare for these ten questions, just go through previous years' solved papers and model papers from UPKAR Publications.

The most important quality of a good teacher is

(A) Sound knowledge of subject matter

(B) Good communication skills

(C) Concern for students’ welfare

(D) Effective leadership qualities

One of the essential characteristics of research is

(A) Generalizability

(B) Usability

(C) Objectivity

(D) Replicability

An investigator studied the census data for a given area and prepared a write-up based on them. Such a write-up is called:

(A) Research paper

(B) Article

(C) Thesis

(D) Research report

Wednesday, March 23, 2011

Introduction to Microsoft .NET Technology

Microsoft .NET Technology
- Windows Desktop Applications [2-Tier]
- Web Applications [ASP.NET]
[3- Tier / n - Tier ]

ASP.NET / C#
------------------------
- ASP.NET Technology, used to develop Web Applications
- Multi - Language Support
[Visual C#.NET, VB.NET, J#, JScript.NET]

Desktop Applications
---------------- 2 - Tier Applications ------ 1992 onwards


Client / Server Technology:
----------------------------------
VB + Any RDBMS
C++ with Any RDBMS
Delphi + Any RDBMS

C#.NET + Any RDBMS
VB.NET + Any RDBMS

Front-End Tools: VB, C++, Delphi, C#.NET, VB.NET
Backend Tool: Any RDBMS

----
Disadvantages of 2-Tier Desktop Applications:
• The same OS is required for all clients as well as the server
• Every client must have the application installed
[containing UI and business logic]
• A particular LAN protocol is necessary

Web Applications [3-Tier, can be extended to n-Tier]
-------------------------------------


• Client Tier [Any OS, Any Browser]

• Middle Tier [At Server, Containing Application]
• Database Tier [At Server]

---------

Technologies for Web Applications:

♦ Proprietary Software
- May be FREE
- But the source code is NOT public.

• Microsoft .NET Technology
• Sun Java Technology

♦ Open-Source Software
- May be FREE
- Original Source Code is public.

• Mono.NET [www.go-mono.com]

• PHP [www.php.net]

• Apache [apache.org]
• JBoss [Jboss.org]

Monday, March 14, 2011

Block Sites from Google's Results in the case of subdomains like blogspot.com


Working of Google's Block Site feature for Blogspot and other Blogger sites

1. Type "blogspot" in the Google search box.

2. Now block any blog on blogspot.com (for example, abc.blogspot.com/).

3. Search the same keyword again.

4. Go through the blocked list....

5. You will find that all the blogs on blogspot.com have been blocked
(i.e., everything with the structure *.blogspot.com).


Now, tell me, is it fair? Why should others suffer .........

Kindly correct me if anything is wrong here!

Thanks

Masroor


Tuesday, March 1, 2011

IIT Bombay Requires Software Engineers/Junior Software Engineers - Last Date: March 18, 2011

INDUSTRIAL RESEARCH & CONSULTANCY CENTRE
OFFICE OF THE DEAN (R&D)
INDIAN INSTITUTE OF TECHNOLOGY, BOMBAY.
No.DRD/Rectt/Project Advt. No.F-83/P(16)10-11 Dated: 18/02/2011
Advertisement No.F-83/P(16)10-11
RECRUITMENT FOR PROJECT


Applications are invited from citizens of India for filling up the following temporary positions for a sponsored project undertaken in the Department of Computer Science & Engineering of this Institute. The positions are temporary, initially for a period of 1 year, and tenable only for the duration of the project. The requisite qualifications, experience, etc. are given below:

Project Code, Project Title & Funding Agency
09MHRD001 : "Empowerment of Student & Teachers through Synch & asynch instruction" (Ministry of Human Resource Development)
Position & Salary: Software Engineer/Jr. Software Engineer (6 Posts)
Consolidated salary: Rs. 16,000 to 22,000/- p.m.
Qualification: B.E./B.Tech., M.Sc./MCA/M.Tech., or PG Diploma in IT/CS from reputed institutes. Proven track record of 1-4 years software development experience in one or more of the following: C/C++, Core Java, Joomla based web portals, JavaFX scripting, JEE5, PHP, PostgreSQL/MySQL and open source technologies, with exposure to all phases of the software development lifecycle. Candidates having a valid GATE score will be preferred. Fresh candidates, or candidates with less experience, will be considered for Jr. Software Engineer. Skills required: software engineering; UML; database concepts and applications (familiarity with RDBMS and SQL, database design, 3-tier architecture); programming skills with Core Java, JSP, Servlets, Struts, PHP, Joomla, MySQL, JavaFX scripting; familiarity with Apache, JBoss, Tomcat and Joomla in a Linux environment; design and development of interactive websites and collaborative environments. Should possess excellent analytical and logical skills.
Job Profile: Software Engineers with 1 to 4 years experience.
Position & Salary: Jr. Project Engineer/Project Engineer (4 Posts)
Consolidated salary: Rs. 16,000-22,000/- p.m.
Qualification: B.E./B.Tech. or M.Sc./MCA/M.Tech. with 1-4 years hardware and engineering development experience in one or more of the following: computer hardware, microprocessors and microcontrollers, integration with other peripherals such as LCD panels, USB devices and back-end computer systems, assembly language programming, 8085/8086, C programming, HyperTerminal/Debugger, Keil software and PCB, etc. Candidates having a valid GATE score will be preferred. Fresh candidates, or candidates with inadequate relevant experience, will be considered for Jr. Project Engineer.
Job Profile: Knowledge of computer hardware, microprocessors and microcontrollers, integration with other peripherals such as LCD panels, USB devices, back-end computer systems, etc.
Position & Salary: System Administrator (2 Posts)
Consolidated salary: Rs. 16,000-22,000/- p.m.
Qualification: B.E./B.Tech./M.Sc./MCA with 1-4 years experience in systems and network administration, preferably with RHCE/CCNA. Experience in handling web/app/DB servers on the Linux platform; knowledge and experience in management of SAN and virtualized environments using open source tools such as KVM and Xen.
Job Profile: System Administrators with 1 to 4 years experience.
Position & Salary: Project Assistant (2 Posts)
Consolidated salary: Rs. 10,000-16,000/- p.m.
Qualification: Graduate in any discipline, preferably with six months to 1 year of administrative experience. Skills required: good communication skills, working knowledge of computers, knowledge of typing and word processing, familiarity with office administration procedures (accounts, logistics and dispatch, drafting, communications, filing, data management, event management, etc.).
Job Profile: Project Assistants with 6 months to 1 year experience.
Position & Salary: Technical Assistant (4 Posts)
Consolidated salary: Rs. 10,000-16,000/- p.m.
Qualification: Graduate in Science/Computer Applications, preferably with six months to 1 year of programming experience in one or more of the following: C/C++, Core Java, Joomla based web portals, JavaFX scripting, JEE5, PHP, PostgreSQL/MySQL and open source technologies, with exposure to all phases of the software development lifecycle; OR Diploma in Electronics & Telecommunication with a minimum of 3 years experience in designing PCBs and SMD soldering. Knowledge of designing Printed Circuit Boards (PCB), soldering SMD components, making rapid prototypes, testing the board after soldering, and SMD rework. Follow up with vendors for PCB manufacturing.
Job Profile: Technical Assistant in programming with six months to 1 year experience, or Technical Assistant in hardware with 3 years experience.
Position & Salary: Research Associate (4 Posts)
Consolidated salary: Rs. 16,000-22,000/- p.m.
Qualification: Any B.E./B.Tech. or M.E./M.Tech. with excellent subject knowledge in the respective domain. Excellent command over written English, knowledge of word processing, and a flair for technical writing. Proven competence in writing publishable technical text and reports. Experience in using web technologies is desirable.
Job Profile: Research Associates for content development/management.
The appointment is for a time-bound project and the candidate is required to work mainly towards the successful completion of the project. The selection committee may offer a lower or higher designation and a lower or higher salary depending upon the experience and performance of the candidate in the interview.
Candidates possessing the requisite qualification and experience should apply online at http://www.ircc.iitb.ac.in/IRCC-Webpage/HRMSLoginPage.jsp . If there is any problem applying online, candidates may apply on plain paper stating the ADVERTISEMENT NO., project title, position applied for, name, permanent and mailing addresses, date of birth, and details of academic qualification and experience, or download the application form available at http://www.ircc.iitb.ac.in/IRCC-Webpage/projstaffinfo.jsp , and send it with copies of certificates/testimonials, superscribing the envelope with ADVERTISEMENT NO. & POST, to the Senior Administrative Officer (R&D), Indian Institute of Technology, Bombay, Powai, Mumbai-400076, so as to reach on or before 18th March, 2011. Candidates called for interview will be required to attend at their own expense.

Tuesday, December 28, 2010

Checking Your Backlinks



Many people use the link:www.yourdomain.com operator (e.g. link:www.airconditioning-florida.com) to check their inbound links in the Google index. However, this operator only returns a sample of your links and also filters out internal links (within your site) and links that are similar to one another. There is a more advanced hack, known only to a few people (until now!), that will trick Google into giving you a more complete set of results. Enter into the query box your domain name with a plus sign inserted between the dot and the TLD extension:

yourdomain.+com

However, the most reliable place to see all the links Google has recorded for you is to use the Google Webmaster Tools console.

Tuesday, June 29, 2010

Working of Google

Google runs on a distributed network of thousands of low-cost computers and can therefore carry out fast parallel processing. Parallel processing is a method of computation in which many calculations can be performed simultaneously, significantly speeding up data processing. Google has three distinct parts:

* Googlebot, a web crawler that finds and fetches web pages.
* The indexer that sorts every word on every page and stores the resulting index of words in a huge database.
* The query processor, which compares your search query to the index and recommends the documents that it considers most relevant.

Let’s take a closer look at each part.

1. Googlebot, Google’s Web Crawler

Googlebot is Google’s web crawling robot, which finds and retrieves pages on the web and hands them off to the Google indexer. It’s easy to imagine Googlebot as a little spider scurrying across the strands of cyberspace, but in reality Googlebot doesn’t traverse the web at all. It functions much like your web browser, by sending a request to a web server for a web page, downloading the entire page, then handing it off to Google’s indexer.

Googlebot consists of many computers requesting and fetching pages much more quickly than you can with your web browser. In fact, Googlebot can request thousands of different pages simultaneously. To avoid overwhelming web servers, or crowding out requests from human users, Googlebot deliberately makes requests of each individual web server more slowly than it’s capable of doing.
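As a rough illustration of this request-and-fetch behaviour, here is a minimal sketch in Python (not Google's actual code; the per-host delay value and the example URL are made up) of a fetcher that downloads a page the way a browser does but deliberately waits between requests to the same server:

```python
import time
import urllib.request
from urllib.parse import urlparse

# Hypothetical politeness delay: wait at least this long between
# requests to the same host (Googlebot's real scheduling is not public).
PER_HOST_DELAY = 2.0
last_request_time = {}  # host -> timestamp of the most recent request

def polite_fetch(url):
    """Download a page over HTTP, throttled per host."""
    host = urlparse(url).netloc
    elapsed = time.time() - last_request_time.get(host, 0.0)
    if elapsed < PER_HOST_DELAY:
        time.sleep(PER_HOST_DELAY - elapsed)  # avoid overwhelming the server
    last_request_time[host] = time.time()
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace")

# Example (hypothetical URL):
# html = polite_fetch("http://www.example.com/")
```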

Googlebot finds pages in two ways: through an add URL form, www.google.com/addurl.html, and through finding links by crawling the web.


Unfortunately, spammers figured out how to create automated bots that bombarded the add URL form with millions of URLs pointing to commercial propaganda. Google rejects those URLs submitted through its Add URL form that it suspects are trying to deceive users by employing tactics such as including hidden text or links on a page, stuffing a page with irrelevant words, cloaking (aka bait and switch), using sneaky redirects, creating doorways, domains, or sub-domains with substantially similar content, sending automated queries to Google, and linking to bad neighbors. So now the Add URL form also has a test: it displays some squiggly letters designed to fool automated “letter-guessers”; it asks you to enter the letters you see — something like an eye-chart test to stop spambots.

When Googlebot fetches a page, it culls all the links appearing on the page and adds them to a queue for subsequent crawling. Googlebot tends to encounter little spam because most web authors link only to what they believe are high-quality pages. By harvesting links from every page it encounters, Googlebot can quickly build a list of links that can cover broad reaches of the web. This technique, known as deep crawling, also allows Googlebot to probe deep within individual sites. Because of their massive scale, deep crawls can reach almost every page in the web. Because the web is vast, this can take some time, so some pages may be crawled only once a month.

Although its function is simple, Googlebot must be programmed to handle several challenges. First, since Googlebot sends out simultaneous requests for thousands of pages, the queue of “visit soon” URLs must be constantly examined and compared with URLs already in Google’s index. Duplicates in the queue must be eliminated to prevent Googlebot from fetching the same page again. Googlebot must determine how often to revisit a page. On the one hand, it’s a waste of resources to re-index an unchanged page. On the other hand, Google wants to re-index changed pages to deliver up-to-date results.
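As a toy illustration of that bookkeeping, here is a sketch in Python of the link harvesting and duplicate elimination described above, using a single in-memory queue and a "seen" set; the real system is distributed and far more sophisticated:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Cull the href of every <a> tag on a fetched page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

frontier = deque()  # queue of "visit soon" URLs
seen = set()        # URLs already queued or indexed, used to drop duplicates

def enqueue_links(page_url, html):
    """Harvest links from a fetched page and queue the ones not seen before."""
    parser = LinkExtractor(page_url)
    parser.feed(html)
    for link in parser.links:
        if link not in seen:      # duplicate elimination
            seen.add(link)
            frontier.append(link)

enqueue_links("http://www.example.com/", '<a href="/about.html">About</a>')
print(frontier)  # deque(['http://www.example.com/about.html'])
```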

To keep the index current, Google continuously recrawls popular, frequently changing web pages at a rate roughly proportional to how often the pages change. Such crawls keep the index current and are known as fresh crawls. Newspaper pages are downloaded daily, while pages with stock quotes are downloaded much more frequently. Of course, fresh crawls return fewer pages than the deep crawl. The combination of the two types of crawls allows Google to both make efficient use of its resources and keep its index reasonably current.

2. Google’s Indexer

Googlebot gives the indexer the full text of the pages it finds. These pages are stored in Google’s index database. This index is sorted alphabetically by search term, with each index entry storing a list of documents in which the term appears and the location within the text where it occurs. This data structure allows rapid access to documents that contain user query terms.

To improve search performance, Google ignores (doesn’t index) common words called stop words (such as the, is, on, or, of, how, why, as well as certain single digits and single letters). Stop words are so common that they do little to narrow a search, and therefore they can safely be discarded. The indexer also ignores some punctuation and multiple spaces, as well as converting all letters to lowercase, to improve Google’s performance.
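Here is a tiny sketch of such an inverted index in Python, with lowercasing, punctuation stripping and a purely illustrative stop-word list; Google's real list and data structures are, of course, not public:

```python
import re

STOP_WORDS = {"the", "is", "on", "or", "of", "how", "why", "a"}  # illustrative only

# term -> {doc_id: [word positions where the term occurs]}
inverted_index = {}

def index_document(doc_id, text):
    """Add one document to the index, ignoring stop words and case."""
    words = re.findall(r"[a-z0-9]+", text.lower())  # strip punctuation, lowercase
    for position, word in enumerate(words):
        if word in STOP_WORDS:
            continue
        inverted_index.setdefault(word, {}).setdefault(doc_id, []).append(position)

index_document(1, "Central Park was designed by Olmsted.")
index_document(2, "The park is on the island of Manhattan.")
print(inverted_index["park"])  # {1: [1], 2: [1]}
```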

3. Google’s Query Processor

The query processor has several parts, including the user interface (search box), the “engine” that evaluates queries and matches them to relevant documents, and the results formatter.

PageRank is Google’s system for ranking web pages. A page with a higher PageRank is deemed more important and is more likely to be listed above a page with a lower PageRank.
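The basic PageRank calculation published in the original Brin and Page paper can be sketched as a simple iteration over the link graph; the damping factor and the toy graph below are illustrative only, and the weighting Google uses today is not public:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # a fuller version would redistribute this "dangling" rank
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] = new_rank.get(target, 0.0) + share
        rank = new_rank
    return rank

# Toy graph: page "a" is linked to by both "b" and "c", so it gets the highest rank.
print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))
```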

Google considers over a hundred factors in computing a PageRank and determining which documents are most relevant to a query, including the popularity of the page, the position and size of the search terms within the page, and the proximity of the search terms to one another on the page. A patent application discusses other factors that Google considers when ranking a page. Visit SEOmoz.org’s report for an interpretation of the concepts and the practical applications contained in Google’s patent application.

Google also applies machine-learning techniques to improve its performance automatically by learning relationships and associations within the stored data. For example, the spelling-correcting system uses such techniques to figure out likely alternative spellings. Google closely guards the formulas it uses to calculate relevance; they’re tweaked to improve quality and performance, and to outwit the latest devious techniques used by spammers.

Indexing the full text of the web allows Google to go beyond simply matching single search terms. Google gives more priority to pages that have search terms near each other and in the same order as the query. Google can also match multi-word phrases and sentences. Since Google indexes HTML code in addition to the text on the page, users can restrict searches on the basis of where query words appear, e.g., in the title, in the URL, in the body, and in links to the page, options offered by Google’s Advanced Search Form and Using Search Operators (Advanced Operators).
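Phrase matching can be illustrated with a positional index like the one sketched in the indexer section: the second query term must occur at the position immediately after the first. The toy index below is made up and this is not Google's actual method:

```python
# Toy positional index: term -> {doc_id: [word positions]}
toy_index = {
    "central": {1: [0]},
    "park":    {1: [1], 2: [1]},
}

def phrase_match(index, first, second):
    """Return doc ids where `second` occurs immediately after `first`."""
    matches = []
    for doc_id, first_positions in index.get(first, {}).items():
        second_positions = set(index.get(second, {}).get(doc_id, []))
        if any(pos + 1 in second_positions for pos in first_positions):
            matches.append(doc_id)
    return matches

print(phrase_match(toy_index, "central", "park"))  # [1] - only document 1 contains the exact phrase
```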

Let's see how Google processes a query; a toy sketch of the whole flow follows the three steps below.



1. The web server sends the query to the index servers. The content inside the index servers is similar to the index in the back of a book--it tells which pages contain the words that match any particular query term.

2. The query travels to the doc servers, which actually retrieve the stored documents. Snippets are generated to describe each search result.

3. The search results are returned to the user in a fraction of a second.
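
Purely to illustrate these three steps, here is a toy end-to-end sketch in Python; the "index server" and "doc server" are plain dictionaries with made-up data, whereas the real ones are large distributed clusters:

```python
# Step 1: the "index server" says which documents contain each query term.
index_server = {
    "central": {1},
    "park": {1, 2},
}

# Step 2: the "doc server" holds the stored documents used to build snippets.
doc_server = {
    1: "Central Park was designed by Olmsted in the nineteenth century.",
    2: "The park commission reviewed several competing designs.",
}

def search(query):
    terms = query.lower().split()
    # Keep only documents that contain every query term.
    matching = set(doc_server)
    for term in terms:
        matching &= index_server.get(term, set())
    results = []
    for doc_id in sorted(matching):
        text = doc_server[doc_id]
        snippet = text[:60] + ("..." if len(text) > 60 else "")  # crude snippet
        results.append((doc_id, snippet))
    return results

# Step 3: the results are returned to the user.
print(search("central park"))
```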