Robot scouting is important, but choosing the scouting details can be difficult.
There are two common types of scouting. The first is pit scouting, where scouts visit the pits, look at the robot, ask questions, and make judgments. Pit scouting may not be very reliable because teams may give an overly optimistic report of their robot. The second is match scouting, where scouts record what the robot actually does during matches. With FIRST's more detailed match reporting, some details may no longer need to be collected by hand.
A needed focus is how the data will be used. Will it be used to decide strategy in a qualification match? For example, the Greybots, team 973, would know the opposing teams' capabilities long before the match started. If none of the opposition can reach the scale, that can influence match strategy.
Teams need not do extensive scouting early on. A team could just look ahead to the next match and check out the opposition's robots. Then, when the team's alliance meets, they can offer a little insight into the opposing alliance.
There is a notion that only teams in the top 8 (or top 4 in FTC) need to scout, because only those teams will be selecting alliance partners.
Some teams will also visit the pits of their future alliance partners and offer help. Teams can help any other team out, but it can be advantageous to focus on alliance mates. 971 has done things such as loaning batteries to alliance mates (some teams do not show up with enough batteries).
A rookie team, Technical Support 7419, had a very developed scouting application, but the application did not present results well. Technical Support was willing to take robots with problems if those problems had apparently been fixed. Bellarmine 254, on the other hand, wanted to see solid reliability: if a robot had trouble in early rounds, other troubles may surface after the early ones are fixed. Generally, robot teams need to have working hardware early, and they need to exercise that hardware to find and exorcise the bugs.
Develop a form, post the data to a URL, and then present the results. TBA can be used to fill in some details. The primary focus of pit scouting should probably be scoring and defense abilities. Any claims by the scouted team should be evaluated. Alliance mates have claimed to score 12 game pieces but, during an actual match, scored only 4. Maybe during one practice run the team scored 12 pieces, but the question is how the team performs in a typical match against typical opposition.
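A minimal sketch of that flow, assuming a hypothetical /api/pitscout endpoint (the URL and field names are placeholders, not an existing service):

```js
// Post one pit-scouting record to a collection URL (placeholder endpoint).
async function submitPitScout(record) {
  const response = await fetch("/api/pitscout", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  });
  if (!response.ok) throw new Error("Submit failed: " + response.status);
  return response.json();
}

// Claimed abilities are recorded as claims, to be checked later against match data.
submitPitScout({
  team: 973,
  event: "2019casj",
  claimedPiecesPerMatch: 12,
  notes: "claims hatch + cargo at all levels; verify cycle time in matches",
});
```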
Pit scouting can be done on paper; high tech is not needed, and results can be entered into a computer later. Citrus Circuits would scout on tablets and then use low-tech communications to convey that information. One of the problems at matches is that teams are not allowed to set up their own WiFi networks.
There are judgments to make. Is the robot well built and rugged or will it fall apart if you breathe on it? Did the designers make reasonable choices? Is the robot heavy and cumbersome for its tasks?
From the pit scouting report, one should be able to estimate the scoring potential. For Deep Space, starting sandstorm from H2 is 6 points and returning to H3 is 12 points, for 18 points. A cycle time of 15 seconds gives 8 scores in 120 seconds; if they are mostly cargo, that is 24 points, so the scoring potential is 42 points. The data also shows threats such as completing a rocket ship or a 15-point HAB for additional ranking points; that may mean an alliance must defend against such a powerful robot. Alternatively, a cycle time of 30 seconds means 4 scores; if the robot drops hatch panels or cargo or frequently fails to score game pieces, then the robot is not an offensive threat but may become a defender.
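A quick helper makes that arithmetic explicit (a sketch only; the 120-second window and per-piece values follow the example above):

```js
// Rough Deep Space scoring-potential estimate, mirroring the arithmetic above.
// habPoints = sandstorm crossing plus endgame climb (e.g., H2 start + H3 climb = 18).
function scoringPotential({ habPoints, cycleTimeSec, pointsPerPiece }) {
  const teleopSeconds = 120;                            // window assumed in the text
  const cycles = Math.floor(teleopSeconds / cycleTimeSec);
  return habPoints + cycles * pointsPerPiece;
}

scoringPotential({ habPoints: 18, cycleTimeSec: 15, pointsPerPiece: 3 }); // 18 + 8*3 = 42
scoringPotential({ habPoints: 18, cycleTimeSec: 30, pointsPerPiece: 3 }); // 18 + 4*3 = 30
```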
For some games, the scoring potential is not so clear. Power Up does not have a clear correlation: a robot may be able to deliver power cubes quickly, but if the opposition delivers them faster, the robot will not score any points. Deep Space does have a clear correlation between delivering game pieces and scoring: placing a hatch panel is worth 2 points, and delivering cargo is worth 3 points.
Pit-scouting items to record:

- Weight.
- Credibility assessment: when was the robot ready to drive? (indicates how much time was left to debug problems); how much driver practice? (it would be good to see whether the team is doing practice matches at the venue).
- Drive train: motors (CIM, mini CIM, 775, NEO, Falcon); West Coast Drive (center drop, omni wheels); mecanum; swerve; other; top speed (m/s).
- Vision capability.
- Number of batteries and number of chargers (shows the maturity of the team and its expectations).
- Mechanisms: elevator (fixed stops), arm, wrist, turret, shooter (single wheel, double wheel, linear, hooded).

Many questions are specific to a game. For Deep Space point scoring:

- Hatch panels: 0, level 1, 2, or 3; alignment method; pickup from the ground; alignment speed/method/other metric; how often a hatch panel is dropped (P{drop hatch panel}).
- Cargo: 0, level 1, 2, or 3; pickup from the ground; pickup from the depot (the back corner is often difficult); pickup from the alliance wall.
- Cycle time.
- Auto-deliver routine (fire and forget); can it tell whether a bay is already loaded?
- Sandstorm: start on H1 (3 points) or H2 (6 points), leap or controlled; deliver a hatch panel or cargo (possibly 2 items during sandstorm).
- Camera: frame rate 5, 10, 15, 30, or 60; pixel resolution 160, 320, 480, 640, or 1920 (questions so we can believe the 4 Mb/s data rate).
- DME CEP: 1 cm, 2 cm, or 5 cm.
- Vision targets: acquisition time (e.g., 170 ms); flood/Limelight; 18-inch white line follower (faster response).
- Endgame climb method (piston, reuse intake as travel motor, travel motor, broken ankle, Dukes of Hazzard); H1 (3 points), H2 (6 points), or H3 (12 points); time needed to climb (also gives an assessment of reliability if they waffle); P{climb}; can it lift others?
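One way to capture the checklist is a structured record that match scouting can later update; the field names below are illustrative, not a fixed schema:

```js
// One possible shape for a Deep Space pit-scouting record (illustrative only).
const pitScoutRecord = {
  team: 0,
  weightLbs: null,
  readyToDriveDate: null,        // credibility: time left to debug
  driverPracticeHours: null,
  driveTrain: { type: "westCoast", motors: "NEO", topSpeedMps: null },
  batteries: 0,
  chargers: 0,
  mechanisms: { elevator: false, arm: false, wrist: false, turret: false },
  hatchPanels: { levels: [1], fromGround: false, pDrop: null },
  cargo: { levels: [1], fromGround: false, fromDepot: false, fromWall: false },
  cycleTimeSec: null,
  sandstorm: { start: "H1", camera: true, delivers: 1 },
  endgame: { climb: "H1", timeSec: null, pClimb: null, liftsOthers: false },
};
```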
Match scouting shows the robot in action, so the robot's abilities should be more apparent. If it is catatonic, that's a clear problem. If it cannot score or takes a long time to score, that's a problem. If it starts dropping game pieces, that is a problem. One could view match scouting as updating the pit scouting with more accurate information.
Other tactical or strategic information may come out. A robot may have a method of defense that is particularly effective. During Power Up, hitting the hind quarter of the robot just as it was about to deliver a power cube was effective. Instead of blocking, sometimes just getting in the way was good defense. Pinning robots at their alliance wall was also effective; it would take the pinned robot several seconds to recover, and once it had recovered, the defender could reexecute a pin. Such observations are general; they need not apply to a specific team. It might be good to choose a team that defends well, but almost any team can use good tactics.
Some match scouting may be done by FIRST. It looks like the match report has enough information to determine that R1 of the Blue Alliance left the HAB during sandstorm and climbed to level 2 during endgame. If that is the case, then obtain those statistics from FIRST rather than entering them.
There are APIs for both FRC and FTC.
A technical issue is how to secure the passwords used by the various APIs. The passwords should not be visible in the source. A good way to do this in a client-side program is not clear: cookies are possible, and local storage is also possible, but development can be troublesome, and I do not see localhost maintaining local storage across browser restarts. Server side is simpler.
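A sketch of the server-side option, assuming Node 18+ (for global fetch) with Express; the FRC_API_AUTH variable name and the v3.0 base URL are assumptions to check against the current FRC API docs:

```js
// "Server side is simpler": a small Node/Express proxy keeps the key in an
// environment variable so it never appears in client-side source.
// FRC_API_AUTH is assumed to hold "username:authorizationKey".
const express = require("express");
const app = express();

app.get("/frc/matches/:season/:eventCode", async (req, res) => {
  const auth = Buffer.from(process.env.FRC_API_AUTH).toString("base64");
  const url = `https://frc-api.firstinspires.org/v3.0/${req.params.season}/matches/${req.params.eventCode}`;
  const upstream = await fetch(url, {
    headers: { Authorization: "Basic " + auth },
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(8080);
```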
Some hacks to implement:
During FIRST events, the field keeps track of many scoring details, such as counting balls or determining the tilt of beams. Referees also enter data, such as whether a robot moved enough during autonomous or committed a foul.
I am seeing a lot of bad data. Usually it is a small number of matches, but 2022 Arizona North is a mess: qm15, qm16, qm18, qm22, qm26, qm29, qm40, qm41, qm46, qm47, qm48, qm49, qm50, qm51, qm52, qm53, qm54, qm55, qm56, qm57, qm58, qm59, qm60, qm61, qm62, qm63, qm64. See TBA 2022azfl_qm50 and FI AZFL/qualifications/50. During teleop, the red alliance scored 5 lower cargo and 2 upper for 10 points; at 1 point per lower and 2 per upper, that should be nine points. Is one of the lower cargo being scored as two points (a late autonomous)?
More typically, the bad-data matches have cargo counts of zero: Hueneme Port Regional qm15 (I think this was an adjustment), and Ventura County Regional qm25 and qm32.
FIRST has an interface to obtain match information.
See https://frc-events.firstinspires.org/services/API.
See https://frcevents2.docs.apiary.io/.
The data may be returned as either application/xml or application/json.
There is a mock server that does not do validation.
FIRST has stopped using Apiary, so all of this must be revisited; see frc-api-docs.firstinspires.org.
The Last-Modified and If-Modified-Since headers would be handled by the caching system. There are better methods now. Some APIs do not provide a reasonable maximum age for cached objects.
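A minimal sketch of handling those headers by hand, assuming an in-memory cache (a real scouting app would persist it):

```js
// Remember Last-Modified per URL and send If-Modified-Since on the next fetch;
// a 304 response means the cached copy is still good.
const cache = new Map(); // url -> { lastModified, body }

async function conditionalGet(url, headers = {}) {
  const entry = cache.get(url);
  if (entry) headers["If-Modified-Since"] = entry.lastModified;
  const res = await fetch(url, { headers });
  if (res.status === 304) return entry.body;           // unchanged since last fetch
  const body = await res.json();
  const lastModified = res.headers.get("Last-Modified");
  if (lastModified) cache.set(url, { lastModified, body });
  return body;
}
```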
An Authorization header is needed:

Authorization: Basic 000000000000000000000000000000000000000000000000000000000000

where the zeros stand for username:authorizationKey (e.g., sampleuser:7eaa6338-a097-4221-ac04-b6120fcc4d49), but the whole string is Base64 encoded to get:

Authorization: Basic c2FtcGxldXNlcjo3ZWFhNjMzOC1hMDk3LTQyMjEtYWMwNC1iNjEyMGZjYzRkNDk=
FAILED!
To use the API, do xhr.setRequestHeader("Authorization", "Basic " + btoa(str));.
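The same header works with fetch(); a sketch, where str is still username:authorizationKey and the v3.0 match-results path and tournamentLevel parameter are examples to verify against the FRC API docs:

```js
// Fetch-based equivalent of the xhr call above, using the sample credentials.
const str = "sampleuser:7eaa6338-a097-4221-ac04-b6120fcc4d49";
fetch("https://frc-api.firstinspires.org/v3.0/2019/matches/CASJ?tournamentLevel=qual", {
  headers: { Authorization: "Basic " + btoa(str) },
})
  .then((res) => res.json())
  .then((matches) => console.log(matches));
```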
Reading the page from a file: URL runs into CORS: the request uses origin null, so the request dies. It worked when served from a server, but now a real key is needed.
FIRST no longer uses Apiary. The API now has a version 3.
The Blue Alliance offers an APIv3 to access team, event, and match information.
The Blue Alliance APIv3.
Tech Talk: Efficiently Querying the TBA API: set the X-TBA-Auth-Key: AUTH_KEY_GOES_HERE header rather than putting the key in the query string, so the cache will hit on the URL.
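For example, a sketch of a header-based TBA query (AUTH_KEY is a placeholder for a real read key):

```js
// Fetch an event's matches from TBA APIv3 with the key in a header,
// keeping the URL cache-friendly.
const TBA = "https://www.thebluealliance.com/api/v3";

async function tbaEventMatches(eventKey, authKey) {
  const res = await fetch(`${TBA}/event/${eventKey}/matches`, {
    headers: { "X-TBA-Auth-Key": authKey },
  });
  if (!res.ok) throw new Error("TBA request failed: " + res.status);
  return res.json();
}

// e.g. tbaEventMatches("2019casj", AUTH_KEY);
```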
Somewhere there should be a flag that says do not cache; see Top 7 Myths about HTTPS.
Infer abilities. The match data identifies members of the alliance and scoring values for team1, team2, and team3. Is that mapping 1-to-1? That would give us absolute data on autonomous movement and endgame climb. CalGames 2021 with Funky Monkeys might give insight. Also the scoring (phone call - lost train of thought).
What is a good way to handle secret keys? Environment data? Put them in the server data? Put them in a cookie? The Web Storage API?
Find a good way to handle year-specific match data.
Alejandro Lim, Chin-Tsang Chiang, and Jen-Chieh Teng, "Estimating Robot Strengths with Application to Selection of Alliance Members in FIRST Robotics Competitions," November 30, 2018. Discusses mathematical metrics.
Team | Nickname | City | State |
---|---|---|---|
Here we could have the 8 alliances. Double-clicking an entry in the table below should add the team to the next available alliance and remove it from the table of available teams. I should be able to undo a selection, maybe repeatedly. In addition, the foot of the table should sum the parameters. After the alliances are selected, we could automatically fill the table.
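A sketch of that interaction logic (the data structures are illustrative; a real page would re-render the tables and the footer sums after each change):

```js
// Double-clicking a team row moves the team from the available list into the
// next open alliance; an undo stack reverses picks repeatedly.
const available = [];                                   // team numbers from the table
const alliances = Array.from({ length: 8 }, () => []);  // 8 alliances, up to 3 teams each
const undoStack = [];

function pickTeam(team) {
  const i = available.indexOf(team);
  const alliance = alliances.find((a) => a.length < 3);
  if (i === -1 || !alliance) return;
  available.splice(i, 1);                               // remove from available teams
  alliance.push(team);
  undoStack.push({ team, alliance });
}

function undoPick() {
  const last = undoStack.pop();
  if (!last) return;
  last.alliance.splice(last.alliance.indexOf(last.team), 1);
  available.push(last.team);                            // back into the available table
}
```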
Teams and their rankings.
Team | OPR | DPR | CCWM |
---|---|---|---|
Trying to give a more detailed summary of performance.
Team | Number | Match Data |
---|---|---|
Match | Blue | Red | Winner |
---|---|---|---|
Clicking on a match above should provide the detail here. This section could also be updated live.
Tried getting heat-map data from 2019 SVR (2019casj), but just got a null. OK, the 2019 off-season Chezy Champs (2019cc) has heat maps. There are 1521 points in qualifying match 1.
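A sketch of that check, assuming TBA's zebra_motionworks match endpoint and its times array (AUTH_KEY as above):

```js
// Pull Zebra MotionWorks (heat map) data for one match and count the sample points.
// A null body means no data for that match (as seen with 2019casj above).
async function zebraPointCount(matchKey, authKey) {
  const res = await fetch(
    `https://www.thebluealliance.com/api/v3/match/${matchKey}/zebra_motionworks`,
    { headers: { "X-TBA-Auth-Key": authKey } }
  );
  const data = await res.json();
  return data ? data.times.length : 0;
}

// e.g. zebraPointCount("2019cc_qm1", AUTH_KEY) reportedly yields 1521 points.
```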