.TH htsn-import 1

.SH NAME
htsn-import \- Import XML files from The Sports Network into an RDBMS.

.SH SYNOPSIS

\fBhtsn-import\fR [OPTIONS] [FILES]

.SH DESCRIPTION
.P
The Sports Network <http://www.sportsnetwork.com/> offers an XML feed
containing various sports news and statistics. Our sister program
\fBhtsn\fR is capable of retrieving the feed and saving the individual
XML documents contained therein. But what to do with them?
.P
The purpose of \fBhtsn-import\fR is to take these XML documents and
get them into something we can use: a relational database management
system (RDBMS), otherwise known as a SQL database. The structure of a
relational database is, well, relational, and the feed XML is not, so
there is some work to do before the data can be imported into the
database.
.P
First, we must parse the XML. Each supported document type (see below)
has a full pickle/unpickle implementation (\(dqpickle\(dq is simply a
synonym for serialize here). That means that we parse the entire
document into a data structure, and if we pickle (serialize) that data
structure, we get the exact same XML document that we started with.
.P
This is important for two reasons. First, it serves as a second level
of validation. The first validation is performed by the XML parser,
but if that succeeds and unpickling fails, we know that something is
fishy. Second, we don't ever want to be surprised by some new element
or attribute showing up in the XML. The fact that we can unpickle the
whole thing now means that we won't be surprised in the future.
.P
The aforementioned feature is especially important because we
automatically migrate the database schema every time we import a
document. If you attempt to import a \(dqnewsxml.dtd\(dq document, all
database objects relating to the news will be created if they do not
exist. We don't want the schema to change out from under us without
warning, so it's important that no XML be parsed that would result in
a different schema than we had previously. Since we can
pickle/unpickle everything already, this should be impossible.
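.P
As a rough illustration only (the real schema is created by the
migration code and is documented in the UML diagrams that accompany
\fBhtsn-import\fR), importing a \(dqnewsxml.dtd\(dq document for the
first time has an effect comparable to running idempotent DDL along
these lines. The \(dqxml_file_id\(dq and \(dqtime_stamp\(dq columns
are described under DATABASE SCHEMA below; everything else here,
including the types, is an assumption:

.nf
-- Illustrative sketch, not the actual DDL.
CREATE TABLE IF NOT EXISTS news (
  id          INTEGER PRIMARY KEY,
  xml_file_id INTEGER NOT NULL UNIQUE,
  time_stamp  TIMESTAMP NOT NULL
  -- ...plus columns for the document's own fields...
);
.fi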

.SH SUPPORTED DOCUMENT TYPES
.P
The XML document types obtained from the feed are uniquely identified
by their DTDs. We currently support documents with the following DTDs:
.IP \[bu] 2
Auto_Racing_Schedule_XML.dtd
.IP \[bu]
Heartbeat.dtd
.IP \[bu]
Injuries_Detail_XML.dtd
.IP \[bu]
injuriesxml.dtd
.IP \[bu]
newsxml.dtd
.IP \[bu]
Odds_XML.dtd
.IP \[bu]
scoresxml.dtd
.IP \[bu]
weatherxml.dtd
.IP \[bu]
GameInfo
.RS
.IP \[bu]
CBASK_Lineup_XML.dtd
.IP \[bu]
cbaskpreviewxml.dtd
.IP \[bu]
cflpreviewxml.dtd
.IP \[bu]
Matchup_NBA_NHL_XML.dtd
.IP \[bu]
MLB_Gaming_Matchup_XML.dtd
.IP \[bu]
MLB_Lineup_XML.dtd
.IP \[bu]
MLB_Matchup_XML.dtd
.IP \[bu]
MLS_Preview_XML.dtd
.IP \[bu]
mlbpreviewxml.dtd
.IP \[bu]
NBA_Gaming_Matchup_XML.dtd
.IP \[bu]
NBA_Playoff_Matchup_XML.dtd
.IP \[bu]
NBALineupXML.dtd
.IP \[bu]
nbapreviewxml.dtd
.IP \[bu]
NCAA_FB_Preview_XML.dtd
.IP \[bu]
NFL_NCAA_FB_Matchup_XML.dtd
.IP \[bu]
nflpreviewxml.dtd
.IP \[bu]
nhlpreviewxml.dtd
.IP \[bu]
recapxml.dtd
.IP \[bu]
WorldBaseballPreviewXML.dtd
.RE
.IP \[bu]
SportInfo
.RS
.IP \[bu]
CBASK_3PPctXML.dtd
.IP \[bu]
Cbask_All_Tourn_Teams_XML.dtd
.IP \[bu]
CBASK_AssistsXML.dtd
.RE
.P
The GameInfo and SportInfo types do not have their own top-level
tables in the database. Instead, their raw XML is stored in the
\(dqgame_info\(dq or \(dqsport_info\(dq table, respectively.

.SH DATABASE SCHEMA
.P
At the top level (with two notable exceptions), we have one table for
each of the XML document types that we import. For example, the
documents corresponding to \fInewsxml.dtd\fR will have a table called
\(dqnews\(dq. All top-level tables contain two important fields,
\(dqxml_file_id\(dq and \(dqtime_stamp\(dq. The former is unique and
prevents us from inserting the same data twice. The time stamp, on the
other hand, lets us know when the data is old and can be removed. The
database schema makes it possible to delete only the outdated top-level
records; all transient children should be removed by triggers.
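.P
For example (illustrative only; the stored \(dqtime_stamp\(dq format
and the retention period are up to the backend and the user), pruning
outdated news records, and with them their children via the triggers
just mentioned, could look like:

.nf
-- Remove top-level news rows older than one week (SQLite syntax).
DELETE FROM news
  WHERE time_stamp < datetime('now', '-7 days');
.fi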
.P
These top-level tables will often have children. For example, each
news item has zero or more locations associated with it. The child
table will be named <parent>_<children>, which in this case
corresponds to \(dqnews_locations\(dq.
.P
To relate the two, a third table may exist with the name
<parent>__<child>. Note the two underscores. This prevents ambiguity
when the child table itself contains underscores. The table joining
\(dqnews\(dq with \(dqnews_locations\(dq is thus called
\(dqnews__news_locations\(dq. This is necessary when the child table
has a unique constraint; we don't want to blindly insert duplicate
records keyed to the parent. Instead we'd like to use the third table
to map an existing child to the new parent.
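.P
A hedged sketch of this layout follows; only the table names come from
the convention above, while the column names, types, and the exact
unique constraint are hypothetical:

.nf
-- Illustrative only.
CREATE TABLE news_locations (
  id      INTEGER PRIMARY KEY,
  city    TEXT,
  state   TEXT,
  country TEXT,
  UNIQUE (city, state, country)
);

-- Maps existing children to parents; note the two underscores.
CREATE TABLE news__news_locations (
  news_id           INTEGER NOT NULL REFERENCES news (id),
  news_locations_id INTEGER NOT NULL REFERENCES news_locations (id)
);
.fi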
.P
Where it makes sense, children are kept unique to prevent pointless
duplication. This slows down inserts and speeds up reads (which are
much more frequent). There is a tradeoff to be made, however. For a
table with a small, fixed upper bound on the number of rows (like
\(dqodds_casinos\(dq), there is great benefit to de-duplication. The
total number of rows stays small, so inserts are still quick, and many
duplicate rows are eliminated.
.P
But with a table like \(dqodds_games\(dq, the number of games grows
quickly and without bound. It is therefore more beneficial to be able
to delete the old games (through an ON DELETE CASCADE, tied to
\(dqodds\(dq) than it is to eliminate duplication. A table like
\(dqnews_locations\(dq is somewhere in between. It is hoped that the
unique constraint on the top-level table's \(dqxml_file_id\(dq will
prevent duplication in this case anyway.
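.P
Conceptually (again an illustration, not the actual DDL), that cascade
looks something like the following; the foreign-key column name is
hypothetical:

.nf
-- Rows in odds_games disappear along with their parent row in odds.
CREATE TABLE odds_games (
  id      INTEGER PRIMARY KEY,
  odds_id INTEGER NOT NULL REFERENCES odds (id) ON DELETE CASCADE
  -- ...game columns...
);
.fi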
.P
The aforementioned exceptions are the \(dqgame_info\(dq and
\(dqsport_info\(dq tables. These tables contain the raw XML for a
number of DTDs that are not handled individually. This is partially
for backwards-compatibility with a legacy implementation, but is
mostly a stopgap due to a lack of resources at the moment. These two
tables (game_info and sport_info) still possess timestamps that allow
us to prune old data.
.P
UML diagrams of the resulting database schema for each XML document
type are provided with the \fBhtsn-import\fR documentation.

.SH XML SCHEMA ODDITIES
.P
There are a number of problems with the XML on the wire. Even if we
construct the DTDs ourselves, the results are sometimes
inconsistent. Here we document a few of them.

.IP \[bu] 2
Odds_XML.dtd

The <Notes> elements here are supposed to be associated with a set of
<Game> elements, but since the pair
(<Notes>...</Notes><Game>...</Game>) can appear zero or more times,
this leads to ambiguity in parsing. We therefore ignore the notes
entirely (although a hack is employed to facilitate parsing).

.IP \[bu]
weatherxml.dtd

There appear to be two types of weather documents; the first has
<listing> contained within <forecast> and the second has <forecast>
contained within <listing>. While it would be possible to parse both,
it would greatly complicate things. The first form is more common, so
that's all we support for now.

.SH OPTIONS

.IP \fB\-\-backend\fR,\ \fB\-b\fR
The RDBMS backend to use. Valid choices are \fISqlite\fR and
\fIPostgres\fR. Capitalization is important, sorry.

Default: Sqlite

.IP \fB\-\-connection-string\fR,\ \fB\-c\fR
The connection string used for connecting to the database backend
given by the \fB\-\-backend\fR option. The default is appropriate for
the \fISqlite\fR backend.

Default: \(dq:memory:\(dq

.IP \fB\-\-log-file\fR
If you specify a file here, logs will be written to it (possibly in
addition to syslog). Can be either a relative or absolute path. It
will not be auto-rotated; use something like logrotate for that.

Default: none

.IP \fB\-\-log-level\fR
How verbose should the logs be? We log notifications at four levels:
DEBUG, INFO, WARN, and ERROR. Specify the \(dqmost boring\(dq level of
notifications you would like to receive (in all-caps); more
interesting notifications will be logged as well. The debug output is
extremely verbose and will not be written to syslog even if you try.

Default: INFO

.IP \fB\-\-remove\fR,\ \fB\-r\fR
Remove successfully processed files. If you enable this, you can see
at a glance which XML files are not being processed, because they're
all that should be left.

Default: disabled

.IP \fB\-\-syslog\fR,\ \fB\-s\fR
Enable logging to syslog. On Windows this will attempt to communicate
(over UDP) with a syslog daemon on localhost, which will most likely
not work.

Default: disabled

.SH CONFIGURATION FILE
.P
Any of the command-line options mentioned above can be specified in a
configuration file instead. We first look for \(dqhtsn-importrc\(dq in
the system configuration directory. We then look for a file named
\(dq.htsn-importrc\(dq in the user's home directory. The latter will
override the former.
.P
The user's home directory is simply $HOME on Unix; on Windows it's
wherever %APPDATA% points. The system configuration directory is
determined by Cabal; the \(dqsysconfdir\(dq parameter during the
\(dqconfigure\(dq step is used.
.P
The file's syntax is given by examples in the htsn-importrc.example
file (included with \fBhtsn-import\fR).
.P
Options specified on the command-line override those in either
configuration file.

.SH EXAMPLES
.IP \[bu] 2
Import newsxml.xml into a preexisting sqlite database named \(dqfoo.sqlite3\(dq:

.nf
.I $ htsn-import --connection-string='foo.sqlite3' \\\\
.I " test/xml/newsxml.xml"
Successfully imported test/xml/newsxml.xml.
Imported 1 document(s) total.
.fi
.IP \[bu]
Repeat the previous example, but delete newsxml.xml afterwards:

.nf
.I $ htsn-import --connection-string='foo.sqlite3' \\\\
.I " --remove test/xml/newsxml.xml"
Successfully imported test/xml/newsxml.xml.
Imported 1 document(s) total.
Removed processed file test/xml/newsxml.xml.
.fi
.IP \[bu]
Use a Postgres database instead of the default Sqlite. This assumes
that you have a database named \(dqhtsn\(dq accessible to user
\(dqpostgres\(dq locally:

.nf
.I $ htsn-import --connection-string='dbname=htsn user=postgres' \\\\
.I " --backend=Postgres test/xml/newsxml.xml"
Successfully imported test/xml/newsxml.xml.
Imported 1 document(s) total.
.fi

.SH BUGS

.P
Send bugs to michael@orlitzky.com.