.TH htsn-import 1

.SH NAME
htsn-import \- Import XML files from The Sports Network into an RDBMS.

.SH SYNOPSIS

\fBhtsn-import\fR [OPTIONS] [FILES]

.SH DESCRIPTION
.P
The Sports Network <http://www.sportsnetwork.com/> offers an XML feed
containing various sports news and statistics. Our sister program
\fBhtsn\fR is capable of retrieving the feed and saving the individual
XML documents contained therein. But what to do with them?
.P
The purpose of \fBhtsn-import\fR is to take these XML documents and
get them into something we can use: a relational database management
system (RDBMS), otherwise known as a SQL database. The structure of a
relational database is, well, relational, and the feed XML is not. So
there is some work to do before the data can be imported into the
database.
.P
First, we must parse the XML. Each supported document type (see below)
has a full pickle/unpickle implementation (\(dqpickle\(dq is simply a
synonym for serialize here). That means that we parse the entire
document into a data structure, and if we pickle (serialize) that data
structure, we get the exact same XML document that we started with.
.P
This is important for two reasons. First, it serves as a second level
of validation. The first validation is performed by the XML parser,
but if that succeeds and unpickling fails, we know that something is
fishy. Second, we don't ever want to be surprised by some new element
or attribute showing up in the XML. The fact that we can unpickle the
whole thing now means that we won't be surprised in the future.
.P
The aforementioned feature is especially important because we
automatically migrate the database schema every time we import a
document. If you attempt to import a \(dqnewsxml.dtd\(dq document, all
database objects relating to the news will be created if they do not
exist. We don't want the schema to change out from under us without
warning, so it's important that no XML be parsed that would result in
a different schema than we had previously. Since we can
pickle/unpickle everything already, this should be impossible.

.SH SUPPORTED DOCUMENT TYPES
.P
The XML document types obtained from the feed are uniquely identified
by their DTDs. We currently support documents with the following DTDs:
.IP \[bu] 2
Auto_Racing_Schedule_XML.dtd
.IP \[bu]
Heartbeat.dtd
.IP \[bu]
Injuries_Detail_XML.dtd
.IP \[bu]
injuriesxml.dtd
.IP \[bu]
newsxml.dtd
.IP \[bu]
Odds_XML.dtd
.IP \[bu]
scoresxml.dtd
.IP \[bu]
weatherxml.dtd
.IP \[bu]
GameInfo
.RS
.IP \[bu]
CBASK_Lineup_XML.dtd
.IP \[bu]
cbaskpreviewxml.dtd
.IP \[bu]
cflpreviewxml.dtd
.IP \[bu]
Matchup_NBA_NHL_XML.dtd
.IP \[bu]
MLB_Gaming_Matchup_XML.dtd
.IP \[bu]
MLB_Lineup_XML.dtd
.IP \[bu]
MLB_Matchup_XML.dtd
.IP \[bu]
MLS_Preview_XML.dtd
.IP \[bu]
mlbpreviewxml.dtd
.IP \[bu]
NBA_Gaming_Matchup_XML.dtd
.IP \[bu]
NBA_Playoff_Matchup_XML.dtd
.IP \[bu]
NBALineupXML.dtd
.IP \[bu]
nbapreviewxml.dtd
.IP \[bu]
NCAA_FB_Preview_XML.dtd
.IP \[bu]
NFL_NCAA_FB_Matchup_XML.dtd
.IP \[bu]
nflpreviewxml.dtd
.IP \[bu]
nhlpreviewxml.dtd
.IP \[bu]
recapxml.dtd
.IP \[bu]
WorldBaseballPreviewXML.dtd
.RE
.P
The GameInfo and SportInfo types do not have their own top-level
tables in the database. Instead, their raw XML is stored in the
\(dqgame_info\(dq or \(dqsport_info\(dq table, respectively.

.SH DATABASE SCHEMA
.P
At the top level (with two notable exceptions), we have one table for
each of the XML document types that we import. For example, the
documents corresponding to \fInewsxml.dtd\fR will have a table called
\(dqnews\(dq. All top-level tables contain two important fields,
\(dqxml_file_id\(dq and \(dqtime_stamp\(dq. The former is unique and
prevents us from inserting the same data twice. The time stamp, on the
other hand, lets us know when the data is old and can be removed. The
database schema makes it possible to delete only the outdated top-level
records; all transient children should be removed by triggers.
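.P
As a rough illustration of that workflow, the sketch below removes
week-old \(dqnews\(dq records from a Sqlite database named
\(dqfoo.sqlite3\(dq and leaves the triggers to clean up the
children. It assumes that \(dqtime_stamp\(dq values are stored in a
form that sqlite's datetime() can compare against; the exact cutoff
expression will differ for the Postgres backend.
.nf
.I $ sqlite3 foo.sqlite3 \\\\
.I "   \(dqDELETE FROM news WHERE time_stamp < datetime('now', '-7 days');\(dq"
.fi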
.P
These top-level tables will often have children. For example, each
news item has zero or more locations associated with it. The child
table will be named <parent>_<children>, which in this case
corresponds to \(dqnews_locations\(dq.
.P
To relate the two, a third table may exist with the name
<parent>__<child>. Note the two underscores. This prevents ambiguity
when the child table itself contains underscores. The table joining
\(dqnews\(dq with \(dqnews_locations\(dq is thus called
\(dqnews__news_locations\(dq. This is necessary when the child table
has a unique constraint; we don't want to blindly insert duplicate
records keyed to the parent. Instead, we'd like to use the third table
to map an existing child to the new parent.
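.P
For example, one might list the locations attached to each news item
by going through the mapping table. This is only a sketch: the key
column names (\(dqid\(dq, \(dqnews_id\(dq and \(dqnews_locations_id\(dq)
are assumptions and may not match the real schema.
.nf
.I $ sqlite3 foo.sqlite3 \\\\
.I "   \(dqSELECT n.xml_file_id, l.*"
.I "      FROM news n"
.I "      JOIN news__news_locations j ON j.news_id = n.id"
.I "      JOIN news_locations l ON l.id = j.news_locations_id;\(dq"
.fi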
.P
Where it makes sense, children are kept unique to prevent pointless
duplication. This slows down inserts and speeds up reads (which are
much more frequent). There is a tradeoff to be made, however. For a
table with a small, fixed upper bound on the number of rows (like
\(dqodds_casinos\(dq), there is great benefit to de-duplication. The
total number of rows stays small, so inserts are still quick, and many
duplicate rows are eliminated.
.P
But with a table like \(dqodds_games\(dq, the number of games grows
quickly and without bound. It is therefore more beneficial to be able
to delete the old games (through an ON DELETE CASCADE, tied to
\(dqodds\(dq) than it is to eliminate duplication. A table like
\(dqnews_locations\(dq is somewhere in between. It is hoped that the
unique constraint on the top-level table's \(dqxml_file_id\(dq will
prevent duplication in this case anyway.
.P
The aforementioned exceptions are the \(dqgame_info\(dq and
\(dqsport_info\(dq tables. These tables contain the raw XML for a
number of DTDs that are not handled individually. This is partially
for backwards-compatibility with a legacy implementation, but is
mostly a stopgap due to a lack of resources at the moment. These two
tables (game_info and sport_info) still possess timestamps that allow
us to prune old data.
.P
UML diagrams of the resulting database schema for each XML document
type are provided with the \fBhtsn-import\fR documentation.

.SH XML SCHEMA ODDITIES
.P
There are a number of problems with the XML on the wire. Even if we
construct the DTDs ourselves, the results are sometimes
inconsistent. Here we document a few of them.

.IP \[bu] 2
Odds_XML.dtd

The <Notes> elements here are supposed to be associated with a set of
<Game> elements, but since the pair
(<Notes>...</Notes><Game>...</Game>) can appear zero or more times,
this leads to ambiguity in parsing. We therefore ignore the notes
entirely (although a hack is employed to facilitate parsing).

.IP \[bu]
weatherxml.dtd

There appear to be two types of weather documents; the first has
<listing> contained within <forecast> and the second has <forecast>
contained within <listing>. While it would be possible to parse both,
it would greatly complicate things. The first form is more common, so
that's all we support for now.

.SH OPTIONS

.IP \fB\-\-backend\fR,\ \fB\-b\fR
The RDBMS backend to use. Valid choices are \fISqlite\fR and
\fIPostgres\fR. Capitalization is important, sorry.

Default: Sqlite

.IP \fB\-\-connection-string\fR,\ \fB\-c\fR
The connection string used for connecting to the database backend
given by the \fB\-\-backend\fR option. The default is appropriate for
the \fISqlite\fR backend.

Default: \(dq:memory:\(dq

.IP \fB\-\-log-file\fR
If you specify a file here, logs will be written to it (possibly in
addition to syslog). Can be either a relative or absolute path. It
will not be auto-rotated; use something like logrotate for that.

Default: none

.IP \fB\-\-log-level\fR
How verbose should the logs be? We log notifications at four levels:
DEBUG, INFO, WARN, and ERROR. Specify the \(dqmost boring\(dq level of
notifications you would like to receive (in all-caps); more
interesting notifications will be logged as well. The debug output is
extremely verbose and will not be written to syslog even if you try.

Default: INFO

.IP \fB\-\-remove\fR,\ \fB\-r\fR
Remove successfully processed files. If you enable this, you can see
at a glance which XML files are not being processed, because they're
all that should be left.

Default: disabled

.IP \fB\-\-syslog\fR,\ \fB\-s\fR
Enable logging to syslog. On Windows this will attempt to communicate
(over UDP) with a syslog daemon on localhost, which will most likely
not work.

Default: disabled

.SH CONFIGURATION FILE
.P
Any of the command-line options mentioned above can be specified in a
configuration file instead. We first look for \(dqhtsn-importrc\(dq in
the system configuration directory. We then look for a file named
\(dq.htsn-importrc\(dq in the user's home directory. The latter will
override the former.
.P
The user's home directory is simply $HOME on Unix; on Windows it's
wherever %APPDATA% points. The system configuration directory is
determined by Cabal; the \(dqsysconfdir\(dq parameter during the
\(dqconfigure\(dq step is used.
.P
The file's syntax is given by examples in the htsn-importrc.example file
(included with \fBhtsn-import\fR).
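.P
For orientation only, a hypothetical snippet follows. The option names
and the key = value syntax shown here are assumptions; defer to
htsn-importrc.example for the real format.
.nf
.I "  # Use a local Postgres database instead of the Sqlite default."
.I "  backend = \(dqPostgres\(dq"
.I "  connection-string = \(dqdbname=htsn user=postgres\(dq"
.I "  log-level = \(dqWARN\(dq"
.fi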
.P
Options specified on the command line override those in either
configuration file.

.SH EXAMPLES
.IP \[bu] 2
Import newsxml.xml into a preexisting sqlite database named \(dqfoo.sqlite3\(dq:

.nf
.I $ htsn-import --connection-string='foo.sqlite3' \\\\
.I " test/xml/newsxml.xml"
Successfully imported test/xml/newsxml.xml.
Imported 1 document(s) total.
.fi
.IP \[bu]
Repeat the previous example, but delete newsxml.xml afterwards:

.nf
.I $ htsn-import --connection-string='foo.sqlite3' \\\\
.I " --remove test/xml/newsxml.xml"
Successfully imported test/xml/newsxml.xml.
Imported 1 document(s) total.
Removed processed file test/xml/newsxml.xml.
.fi
.IP \[bu]
Use a Postgres database instead of the default Sqlite. This assumes
that you have a database named \(dqhtsn\(dq accessible to user
\(dqpostgres\(dq locally:

.nf
.I $ htsn-import --connection-string='dbname=htsn user=postgres' \\\\
.I " --backend=Postgres test/xml/newsxml.xml"
Successfully imported test/xml/newsxml.xml.
Imported 1 document(s) total.
.fi

.SH BUGS

.P
Send bugs to michael@orlitzky.com.