The assignment was to create my own resume using HTML.
My resume is available here: https://github.com/supriyasaha/hometaskrepo/blob/master/resume/resume.html
<!DOCTYPE html>
<html>
<head>
  <title> Resume </title>
</head>
<body>
  <h1> Resume </h1>
  <div id='information'>
    <h2> My Name </h2>
    <p> email@address.com </p>
    <p> 100 Address St </p>
    <p> Town, State 01234 </p>
    <p> (100) 200-5000 </p>
  </div>
  <div id='education'>
    <h2> Education </h2>
    <p> University Education and whatnot </p>
  </div>
  <div id='technical-skills'>
    <h2> Technical Skills </h2>
    <h3> Languages </h3>
    <p> Arabic, English and French </p>
    <h3> Programming languages </h3>
    <p> all the programming languages I know </p>
  </div>
  <div id='work-experience'>
    <h2> Work Experience </h2>
    <h3> First Job </h3>
    <p> what did I do on that job </p>
    <p> more stuff </p>
    <p> even more stuff </p>
    <h3> Second Job </h3>
    <p> what did I do on that job as well </p>
    <p> more stuff </p>
  </div>
</body>
</html>
I have already installed virtualenv. To work in a virtual environment, we use:
[supriya@localhost] ~ $ cd virtual
virtual $ source virt1/bin/activate
(virt1) $ vim b.py
(virt1)$
The code prints the title and the author name of each blog post from the blog site 'http://planet.fedoraproject.org'.
Now, if we run the file with ./b.py, we get output like:
(virt1)$ ./b.py
title: CatN | CentOS Dojo author: Richard W.M. Jones
title: Daily log July 11th 2013 author: Dave Jones
#!/usr/bin/env python
from bs4 import BeautifulSoup
import urllib2

url = 'http://planet.fedoraproject.org'
html_doc = urllib2.urlopen(url)  # open the URL
data = html_doc.read()  # read the HTML document from the website
soup = BeautifulSoup(data)  # parse the data

# Extract the title of each blog post: 'div' tags with class='blog-entry-title'
title = soup.findAll('div', attrs={'class': 'blog-entry-title'})
# Extract the author of each blog post: 'div' tags with class='blog-entry-author'
author = soup.findAll('div', attrs={'class': 'blog-entry-author'})

length = len(author)  # the total number of posts on the page
for x in range(length):
    print "title: %s " % title[x].find('a').string  # print the title of each post
    print "author: %s" % author[x].find('a').string  # print the author of each post

html_doc.close()
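As an aside, the index-based printing loop at the end can also be written with zip. This is a minimal Python 3 sketch, using sample strings in place of the strings actually extracted from the parsed tags:

```python
# Sample data standing in for the strings extracted from the parsed tags
titles = ["CatN | CentOS Dojo", "Daily log July 11th 2013"]
authors = ["Richard W.M. Jones", "Dave Jones"]

# zip pairs each title with its author, replacing the index-based loop
for title, author in zip(titles, authors):
    print("title: %s author: %s" % (title, author))
```

zip stops at the shorter of the two lists, which also guards against a page where one list has an extra entry.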
the link to the code is: https://github.com/supriyasaha/hometaskrepo/blob/master/planetparser/planetparser.py
HTML assignment
Problem: To create a resume using HTML.
Description: Creating a resume using HTML tags.
Below is the link to the program.
I installed beautifulsoup4, lxml, and requests modules for this assignment in my 'virt1' environment.
(virt1) $ yolk -l
Python          - 2.7.5   - active development (/usr/lib/python2.7/lib-dynload)
beautifulsoup4  - 4.2.1   - active
lxml            - 3.2.1   - active
pip             - 1.3.1   - active
requests        - 1.2.3   - active
setuptools      - 0.6c11  - active
wsgiref         - 0.1.2   - active development (/usr/lib/python2.7)
yolk            - 0.4.3   - active
This program reads a web page and outputs each blog post's title and author.
$ python planetparser_rss.py
A link to the source code.
author:pingou	title:Le blog de pingou - Tag - Fedora-planet
author:pjp	title:pjp's blog
author:tuxdna	title:DNA of the TUX
In the main function, we retrieve the data from the URL and store it in a string.
# fetch data
s_url = 'http://planet.fedoraproject.org'
f = requests.get(s_url)
html_doc = f.text
The following filter conditions retrieve the blog titles & authors:
# extract title & author
tags_header = SoupStrainer(id="people_feeds")
soup = BeautifulSoup(html_doc, "lxml", parse_only=tags_header)
for link in soup.select('a[href]'):
    if link.string or link.get('title'):  # skip entries where both are None
        print "author:%s\ttitle:%s" % (link.string, link.get('title'))
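The same idea (restrict attention to the element with id="people_feeds", then walk its links) can be illustrated offline with only the standard library. This is a hedged Python 3 sketch using html.parser as a stand-in for BeautifulSoup/SoupStrainer; the HTML fragment, author names and URLs are invented for the example:

```python
from html.parser import HTMLParser

class FeedLinkParser(HTMLParser):
    """Collect (link text, title attribute) pairs from <a> tags
    found inside the element with id="people_feeds"."""

    def __init__(self):
        super().__init__()
        self.inside = False        # True once id="people_feeds" has been seen
        self.current_title = None  # title attribute of the <a> tag we are in
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if attrs.get('id') == 'people_feeds':
            self.inside = True
        if self.inside and tag == 'a' and 'href' in attrs:
            self.current_title = attrs.get('title')

    def handle_data(self, data):
        # The first non-blank text after an <a> start tag is the link text
        if self.current_title is not None and data.strip():
            self.links.append((data.strip(), self.current_title))
            self.current_title = None

# Invented fragment shaped like the feed list the script filters on
doc = '''<ul id="people_feeds">
<li><a href="http://example.org/pingou" title="Le blog de pingou">pingou</a></li>
<li><a href="http://example.org/pjp" title="pjp's blog">pjp</a></li>
</ul>'''

parser = FeedLinkParser()
parser.feed(doc)
for author, title in parser.links:
    print("author:%s\ttitle:%s" % (author, title))
```

SoupStrainer does this filtering at parse time, which is faster on a large page; the sketch above only shows the equivalent selection logic.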
#!/usr/bin/env python
import urllib2  # module for opening URLs
import sys      # module for command line arguments

def share(symbol):
    """Open and read the quote URL, then print the share value."""
    try:
        # Open the URL; 'f=l1' requests the last trade price
        link = urllib2.urlopen('http://download.finance.yahoo.com/d/quotes.csv?s=' + symbol + '&f=l1')
        r = float(link.read())  # read the response and convert it to a float
        if r == 0.00:  # the service returns 0.00 when a wrong symbol is entered
            print "The NASDAQ code entered is wrong"
        else:
            print "The current share value for the given NASDAQ symbol is %f" % (r)
    except IOError:  # handle errors such as the URL not opening
        print "Failed to open the finance.yahoo.com URL"

if __name__ == '__main__':
    if len(sys.argv) != 2:
        print "Enter a valid NASDAQ symbol"
        sys.exit(1)
    else:
        share(sys.argv[1])
        sys.exit(0)
Code is here: sharevalue.py
How to run the above script:
$ python sharevalue.py <NASDAQ symbol>
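The quotes.csv endpoint returned a single CSV field, so its handling can be shown offline. This Python 3 sketch mirrors the script's logic of treating 0.00 as an invalid symbol; the sample response strings are made up for illustration:

```python
def parse_quote(csv_text):
    """Parse the one-field CSV body that 'f=l1' returned, e.g. '923.00'."""
    value = float(csv_text.strip())
    if value == 0.0:
        return None  # the service used 0.00 to signal an unknown symbol
    return value

# Sample responses (illustrative strings, not live data)
print(parse_quote("923.00\n"))  # 923.0
print(parse_quote("0.00\n"))    # None
```

Returning None instead of printing keeps the parsing separate from the output, which makes the function easy to test.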
The assignment was to display all the blog post titles and authors from Planet Fedora in the terminal, using a virtual environment.
So, the first step is to create a virtual environment. I created a temporary directory named virtual and moved into it. Now we create and activate the environment:
$ virtualenv virt0
New python executable in virt0/bin/python
Installing distribute....................done.
Installing pip...............done.
$ source virt0/bin/activate
Now the terminal will look like:
(virt0)sudip@sudip-mint virtual $
Now the environment is created and we are in it. We need a module named BeautifulSoup to do this job. Let us download it:
$ pip install beautifulsoup
Downloading/unpacking beautifulsoup
  Downloading BeautifulSoup-3.2.1.tar.gz
  Running setup.py egg_info for package beautifulsoup
Installing collected packages: beautifulsoup
  Running setup.py install for beautifulsoup
Successfully installed beautifulsoup
Cleaning up...
Now setup is complete.
#!/usr/bin/env python
from BeautifulSoup import BeautifulSoup
import urllib2
import sys

def fetch():
    """
    Fetch the data from the URL.
    """
    # Fetching the HTML content from Planet Fedora
    html_cont = urllib2.urlopen('http://planet.fedoraproject.org')
    data = html_cont.read()
    html_cont.close()
    return data

def make_soup(food):
    """
    Parse the HTML document and return the lists of desired output.

    :arg food: html data
    """
    # From the fetched data, BeautifulSoup gives a BeautifulSoup object as 'soup'
    soup = BeautifulSoup(food)

    # Finding all the 'div' elements with the required class attribute
    post_list = soup.findAll('div', attrs={'class': 'blog-entry-title'})
    author_list = soup.findAll('div', attrs={'class': 'blog-entry-author'})

    # post_list, author_list: lists of the required data
    return post_list, author_list

def printem(post_name_list, author_list):
    """
    Print the post titles and their respective authors.
    """
    # Initialize the counter
    count = 0

    # Find how many list elements there are
    length = len(post_name_list)

    # Loop over the post titles and the corresponding authors
    while count < length:
        # Find the text
        post = post_name_list[count].find('a').string
        by_author = author_list[count].find('a').string

        # Print them
        count += 1
        print str(count) + ': Post Title: ' + post
        print '   Author: ' + by_author + '\n'

if __name__ == '__main__':
    data = fetch()
    post, author = make_soup(data)
    printem(post, author)
    sys.exit(0)
Run the above script like:
$ ./planetparser.py
or:
$ python planetparser.py
Example output is given below:

1: Post Title: Week-end hacks
   Author: Bastien Nocera

2: Post Title: kernel news – 15.07.2013
   Author: Rares Aioanei

3: Post Title: morituri 0.2.1 “married” released
   Author: Thomas Vander Stichele

4: Post Title: Fedora 19 With Google-authenticator login
   Author: Onuralp SEZER

5: Post Title: Alistando Fedora 19 Release Party Managua
   Author: Neville A. Cross - YN1V

6: Post Title: How to run Pidora in QEMU
   Author: Ruth Suehle
Write a Python script to print the latest share value of a company whose NASDAQ symbol is given as a command line argument.
#!/usr/bin/env python
import urllib2
import sys

def share(symbol):
    try:
        x = urllib2.urlopen('http://download.finance.yahoo.com/d/quotes.csv?s=' + symbol + '&f=l1')
        value = float(x.read())
        if value == 0.00:
            print "NASDAQ code invalid"
        else:
            print "The current sharevalue for %s is %f" % (symbol, value)
    except IOError:
        print "Failed to open the finance.yahoo.com URL. Check your internet connection."

if __name__ == '__main__':
    if len(sys.argv) != 2:
        print "Invalid entry. Enter a valid NASDAQ symbol."
        sys.exit(1)
    else:
        share(sys.argv[1])
        sys.exit(0)
The program is available here.
$ chmod +x sharevalue.py
$ ./sharevalue.py <NASDAQ symbol>
or
python sharevalue.py <NASDAQ symbol>
$ python sharevalue.py GOOG
The current sharevalue for GOOG is 923.000000
This script will parse Planet Fedora and output the information from the page in a human readable way to the terminal. You can find the script at the following link.
The first thing that needs to be done is to create a virtual environment and install the needed modules. For this script, we need BeautifulSoup.
$ virtualenv pparser
New python executable in pparser/bin/python2.7
Also creating executable in pparser/bin/python
Installing setuptools............done.
Installing pip...............done.
$ source pparser/bin/activate
(pparser)$ pip install beautifulsoup4
Downloading/unpacking beautifulsoup4
  Downloading beautifulsoup4-4.2.1.tar.gz (139Kb): 139Kb downloaded
  Running setup.py egg_info for package beautifulsoup4
Installing collected packages: beautifulsoup4
  Running setup.py install for beautifulsoup4
Successfully installed beautifulsoup4
Cleaning up...
#!/usr/bin/env python
"""
planetparser is a script that parses the information on
http://planet.fedoraproject.org/ and prints the post title, the author,
the link to the original post and the post itself to the terminal
"""
from urllib import urlopen
from sys import exit, argv
from bs4 import BeautifulSoup
import re

def ParseAuthor(link):
    """
    Use a regex to find the names of the authors on the whole page.
    This will return a list of the names.
    """
    PatternAuthor = re.compile('<div\sclass="blog-entry\s(.+)">')
    return re.findall(PatternAuthor, link)

def ParsePostTitle(link):
    """
    Use a regex to find the post titles on the whole page.
    This will return a list of the titles.
    """
    PatternPostTitle = re.compile('<div\sclass="blog-entry-title">' +
                                  '<a\shref=.+>(.+)</a></div>')
    return re.findall(PatternPostTitle, link)

def ParseLink(link):
    """
    Use a regex to find the post links on the whole page.
    This will return a list of the links.
    """
    PatternLink = re.compile('<div\sclass="blog-entry-title">' +
                             '<a\shref="(.+)">.+</a></div>')
    return re.findall(PatternLink, link)

def ParsePost(link):
    """
    Use BeautifulSoup to find the content of the posts and
    return the list of posts in html, unchanged.
    """
    Soup = BeautifulSoup(link)
    Posts = Soup.findAll(attrs={"class": "blog-entry-content"})
    return Posts

def PrintList(ListAuthor, ListPostTitle, ListLink, NoPost, ListPost=''):
    """
    Print out the information given to it in lists,
    in a formatted way, to the terminal.
    """
    print ""
    print "Fedora Planet"
    print "-------------\n"
    for i in range(len(ListAuthor)):
        print "Author: %s" % ListAuthor[i]
        print "Post Title: %s" % ListPostTitle[i]
        print "Link: %s" % ListLink[i]
        if NoPost == 0:
            print "-" * (len(ListLink[i]) + 6)
            print "\n"
            # We use .text to get only the text; strip html tags
            print "\t%s" % ListPost[i].text
            print "\n"
            print "*" * 100
            print "\n"

if __name__ == '__main__':
    # The first thing we need to do is open the url and read it.
    # We'll catch the exception if this doesn't work for some reason
    # and exit the script.
    NoPost = 0
    if len(argv) > 2:
        print "Too many arguments"
        print "Please use -h or --help for further help"
        exit(1)
    if len(argv) == 2:
        if argv[1] == '-h' or argv[1] == '--help':
            print "Usage: ./planetparser.py [OPTIONS]"
            print "Parses Planet Fedora and outputs information from the page.\n"
            print "Mandatory arguments"
            print "-h, --help\t\tprint this help page"
            print "-n, --no-post\t\tdo not print posts"
            exit(1)
        elif argv[1] == '-n' or argv[1] == '--no-post':
            NoPost = 1
        else:
            print "Wrong arguments"
            print "Please use -h or --help for further help"
            exit(1)
    try:
        link = urlopen("http://planet.fedoraproject.org/").read()
    except IOError:
        print "Could not connect to website"
        print "Please check your connection and try again"
        exit(1)

    # Get the list of authors
    ListAuthor = ParseAuthor(link)
    # Get the list of post titles
    ListPostTitle = ParsePostTitle(link)
    # Get the list of the links
    ListLink = ParseLink(link)

    # If the user does not want to display the posts,
    # don't bother to parse them.
    if NoPost == 0:
        # Get the posts posted on the page
        ListPost = ParsePost(link)
        # Print the output in a formatted manner
        PrintList(ListAuthor, ListPostTitle, ListLink, NoPost, ListPost)
    else:
        PrintList(ListAuthor, ListPostTitle, ListLink, NoPost)
    exit(0)
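The three regular expressions can be exercised offline on a small HTML fragment. The markup, author name and URL below are made up for illustration, modeled on the structure the script expects, where the author's name appears in the class attribute of the enclosing div (Python 3 syntax):

```python
import re

# Hypothetical fragment shaped like the page markup the script parses
sample = '''<div class="blog-entry Jane Doe">
<div class="blog-entry-title"><a href="http://example.org/post">Hello Planet</a></div>
</div>'''

# Same patterns as ParseAuthor, ParsePostTitle and ParseLink
authors = re.findall(r'<div\sclass="blog-entry\s(.+)">', sample)
titles = re.findall(r'<div\sclass="blog-entry-title">'
                    r'<a\shref=.+>(.+)</a></div>', sample)
links = re.findall(r'<div\sclass="blog-entry-title">'
                   r'<a\shref="(.+)">.+</a></div>', sample)

print(authors)  # ['Jane Doe']
print(titles)   # ['Hello Planet']
print(links)    # ['http://example.org/post']
```

Note that the greedy (.+) only behaves because '.' does not match newlines, so each pattern is effectively confined to one line of markup; any change to the page layout would break these regexes, which is the usual argument for a real parser like BeautifulSoup.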
(pparser)$ ./planetparser.py -h
Usage: ./planetparser.py [OPTIONS]
Parses Planet Fedora and outputs information from the page.

Mandatory arguments
-h, --help		print this help page
-n, --no-post		do not print posts
(pparser)$ ./planetparser.py -n

Fedora Planet
-------------

Author: Onuralp SEZER
Post Title: Fedora 19 With Google-authenticator login
Link: http://thunderbirdtrr.blogspot.com/2013/07/fedora-19-with-google-authenticator.html

Author: Neville A. Cross - YN1V
Post Title: Alistando Fedora 19 Release Party Managua
Link: http://www.taygon.com/?p=827

Author: Ruth Suehle
Post Title: How to run Pidora in QEMU
Link: http://hobbyhobby.wordpress.com/2013/07/14/how-to-run-pidora-in-qemu/
...
(pparser)$ ./planetparser.py

Fedora Planet
-------------

Author: Onuralp SEZER
Post Title: Fedora 19 With Google-authenticator login
Link: http://thunderbirdtrr.blogspot.com/2013/07/fedora-19-with-google-authenticator.html
-----------------------------------------------------------------------------------------

	Hello everyone ; Novadays I was thinking about how do I get more secure system on my Fedora 19. (...)

****************************************************************************************************

Author: Neville A. Cross - YN1V
Post Title: Alistando Fedora 19 Release Party Managua
Link: http://www.taygon.com/?p=827
----------------------------------

	Una de las cosas que se espera de un lanzamiento de una nueva versión de Fedora son los discos. (...)
...